Time  | Nickname  | Message
00:42 | Angantyr  | http://www.thedailyshow.com/watch/mon-june-18-2012/parmy-olson
00:42 | Angantyr  | friendster joke:)
01:50 | dashcloud | whoa- you can now buy a 90'' TV that's not rear projection- Sharp has an LCD TV at that size
02:03 | shaqfu    | Can't imagine the price
02:04 | shaqfu    | Or for that matter, heat
02:05 | dashcloud | 11k
09:01 | Schbirid  | since i failed to find a working compressed fuse fs i guess i can extract all the forums 7z files, make one tar and use a archive fs on that
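
Schbirid's plan, roughly sketched in Python under some assumptions (a 7z binary on PATH, the dumps sitting in a forums-7z/ directory, archivemount available for the final mount step; every path and name here is made up for illustration):

    import glob
    import subprocess
    import tarfile

    # 1. extract every forum dump ("x" keeps the directory structure, -o sets the output dir)
    for archive in sorted(glob.glob("forums-7z/*.7z")):
        subprocess.run(["7z", "x", "-oforums", archive], check=True)

    # 2. roll the extracted tree into one uncompressed tar
    with tarfile.open("forums.tar", "w") as tf:
        tf.add("forums")

    # 3. mount it through an archive filesystem, e.g.:
    #    archivemount forums.tar /mnt/forums
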
10:59 | Schbirid  | and extracting will take a day or so
10:59 | Schbirid  | :)
16:17 | Coderjoe  | Schbirid: compressed FS is kinda tricky. I might be able to hack up something that works on gz or bz2, though. (and if there is tar inside it, I guess you'd need a separate layer for that, too)
16:18 | Schbirid  | i'll probably use avfs. used it successfully in the past. i wonder how it will handle an archive this big though
16:18 | Schbirid  | or archivemount
16:18 | Schbirid  | http://sourceforge.net/apps/mediawiki/fuse/index.php?title=ArchiveFileSystems says it supports tar.gz
17:16 | Schbirid  | extraction is about half done :D
17:41 | Coderjoe  | my implementation of the bz2 or gz fuse layer would essentially serve up a single file (the decompressed file) with full seeking capabilities, without having to keep a full decompressed copy.
17:43 | Coderjoe  | (as it would do a decompress pass and build an index. for bz2 it would simply be an index of bz2 position to decompressed position. for gz it would be gz position, decompressed position, and however many bytes are needed in the window to begin decompression at that point)
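
A rough Python sketch of the indexing idea Coderjoe describes, not his implementation: all names and the checkpoint interval are illustrative, and zlib decompressor snapshots (decompressobj.copy()) stand in for storing the raw window bytes. One streaming pass over a .gz file records periodic checkpoints; later reads resume from the nearest checkpoint instead of re-decompressing from the start or keeping a full decompressed copy.

    import zlib

    CHECKPOINT_EVERY = 4 * 1024 * 1024   # decompressed bytes between checkpoints
    READ_CHUNK = 64 * 1024

    def build_index(path):
        """One decompress pass; returns [(decompressed_offset, compressed_offset, snapshot), ...]."""
        index = []
        d = zlib.decompressobj(wbits=zlib.MAX_WBITS | 16)   # 16 => gzip wrapper
        out_pos, next_mark = 0, CHECKPOINT_EVERY
        with open(path, "rb") as f:
            while True:
                chunk = f.read(READ_CHUNK)
                if not chunk:
                    break
                out_pos += len(d.decompress(chunk))
                if out_pos >= next_mark:
                    # snapshot the decompressor state plus where we are in both streams
                    index.append((out_pos, f.tell(), d.copy()))
                    next_mark = out_pos + CHECKPOINT_EVERY
        return index

    def read_at(path, index, offset, size):
        """What a FUSE read() would do: serve `size` bytes at `offset` of the decompressed file."""
        out_pos, comp_pos, snap = 0, 0, None
        for o, c, s in index:             # last checkpoint at or before `offset`
            if o <= offset:
                out_pos, comp_pos, snap = o, c, s
        d = snap.copy() if snap else zlib.decompressobj(wbits=zlib.MAX_WBITS | 16)
        skip, out = offset - out_pos, b""
        with open(path, "rb") as f:
            f.seek(comp_pos)
            while len(out) < size:
                chunk = f.read(READ_CHUNK)
                if not chunk:
                    break
                data = d.decompress(chunk)
                data, skip = data[min(skip, len(data)):], max(0, skip - len(data))
                out += data
        return out[:size]

For bz2 the index is simpler in principle, since each bz2 block decodes independently, which is why Coderjoe's bz2 version only needs (bz2 position, decompressed position) pairs.
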
17:45 | Coderjoe  | the tar fuse module would keep an index of the files in the tar file, so it could serve up directory listings quickly, as well as get file contents quickly
17:45 | Coderjoe  | even with multi-gigabyte tar files
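
The tar side is easier to sketch. Assuming a plain uncompressed tar and Python's tarfile module (function names invented for illustration), one scan records where each member's data lives; directory listings then come from the in-memory index, and file contents from a seek and a read, however large the tar is.

    import tarfile

    def build_tar_index(path):
        """One scan of the tar; returns {member_name: (data_offset, size)}."""
        index = {}
        with tarfile.open(path, "r:") as tf:      # "r:" = plain, uncompressed tar
            for member in tf:
                if member.isfile():
                    index[member.name] = (member.offset_data, member.size)
        return index

    def read_member(path, index, name):
        """Serve one file's bytes straight out of the tar."""
        offset, size = index[name]
        with open(path, "rb") as f:
            f.seek(offset)
            return f.read(size)

A FUSE wrapper along these lines would answer directory listings and stat calls out of the index and only touch the tar itself on read().
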
18:07 | ersi      | http://news.ycombinator.com/item?id=4136682
18:07 | ersi      | Oh god
18:50 | Coderjoe  | "oh god" what?
18:54 | ersi      | imo full of facepalminess
19:34 | Schbirid  | yay, 10 hours later, the forums are extracted
19:39 | Schbirid  | is it better to tar first and gzip later? or how does gzip know how to compress well? does it even care?
19:40 | Schbirid  | i have a LOT of duplication
19:40 | Schbirid  | usually close together on a filename basis though
19:54 | Coderjoe  | gzip just compresses everything it gets as one stream. tar.gz files are tarred then gzipped, and the method is referred to as a solid archive
19:55 | Coderjoe  | you might get better results by sorting the list of files in some way and feeding the list to tar, which will result in placing the similar files close together, so the compressor might benefit (depending on window size. gzip's deflate will only have a 32KB window at most)
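
A hedged illustration of that suggestion in Python (directory and function names are invented): collect the file list, sort it so similar filenames sit next to each other, and add the files to a solid .tar.gz in that order, giving deflate's small window a better chance at the duplication.

    import os
    import tarfile

    def tar_sorted(src_dir, out_path):
        """Build a solid .tar.gz with members added in sorted-path order."""
        paths = []
        for root, _dirs, files in os.walk(src_dir):
            for name in files:
                paths.append(os.path.join(root, name))
        paths.sort()                              # similar filenames end up adjacent
        with tarfile.open(out_path, "w:gz") as tf:
            for p in paths:
                tf.add(p, recursive=False)

    # e.g. tar_sorted("forums", "forums-sorted.tar.gz")
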
20:00 | Schbirid  | i just did "tar -czf forums.tar.gz forums" so far, let's wait what comes out of it :)
20:02 | Coderjoe  | the z causes tar to pass its own output to "gzip -9c" and take the output of that to pass as the output of tar
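
The same pipeline spelled out as a subprocess sketch (whether the bundled gzip actually runs with -9 depends on the tar implementation, so the compression level is left at its default here; the output filename is just an example):

    import subprocess

    with open("forums.tar.gz", "wb") as out:
        tar = subprocess.Popen(["tar", "-cf", "-", "forums"], stdout=subprocess.PIPE)
        gz = subprocess.Popen(["gzip", "-c"], stdin=tar.stdout, stdout=out)
        tar.stdout.close()      # so gzip sees EOF once tar exits
        gz.wait()
        tar.wait()
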
20:10 | Coderjoe  | (moving from #archiveteam) this particular article has a supposed "original set" of rules. 1 was "do not talk about rules 2-33" rather than "do not talk about /b/
20:10 | Coderjoe  | "
20:10 | Coderjoe  | and then further down they have a later set with the usual 1/2/34/35