#archiveteam-bs 2017-06-05,Mon


timmcThe WARC had the space in it? [00:00]
JAAYes. [00:00]
timmcNow I wonder what browsers do when confronted with that. [00:00]
JAAThey'll handle it fine, probably.
Browsers are developed to handle all sorts of crap thrown at them by badly written web servers.
(Unfortunately, because that means that the web servers never get fixed to conform to the standards.)
But I guess the IA library which handles this stuff is more strict.
[00:00]
timmcOK, can confirm, Firefox is fine with it. [00:06]
voidstavvvvvvv/13
oops
[00:09]
xmchi [00:14]
voidstahello [00:14]
JAALooks like others have experienced this problem of web servers including whitespace in the chunk size before: https://webcache.googleusercontent.com/search?q=cache:https%3A%2F%2Fjava.net%2Fjira%2Fbrowse%2FGRIZZLY%2D1684 (java.net shut down recently :-| ) [00:14]
joepie91JAA: timmc: chunk sizes in the WARCs are padded to multiples of 3 hex chars
using spaces
(in my report, they're represented as dots)
[00:15]
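To make that concrete, here's an invented sketch of what such a body could look like on the wire (sizes and payloads are made up, and trailing-space padding is an assumption; only the multiple-of-three width comes from the report above):

```python
# Invented example of the padded chunk framing (dots in the report = spaces).
body = (
    b"c6 \r\n"      + b"x" * 0xC6   + b"\r\n"   # 2 hex digits padded to 3
    + b"1a2b  \r\n" + b"y" * 0x1A2B + b"\r\n"   # 4 hex digits padded to 6
    + b"0  \r\n\r\n"                            # zero-length final chunk
)

# Python's int() happens to tolerate the whitespace; a strict RFC 7230
# parser would not.
print(int(b"c6 ", 16))   # 198
```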
JAAYeah, but not always.
Hmm, or maybe it is always.
[00:17]
joepie91from what I could see, it's always
just not all numbers in the source are chunk sizes :)
[00:19]
JAAFound the Apache bug: https://bz.apache.org/bugzilla/show_bug.cgi?id=41364
Although it seems unlikely that they were still using that version last year. :-P
Someone claims there that "The spaces padding the hex value are ok according to rfc2616"
[00:21]
timmcjoepie91: Yeah, I tested it with the added spaces and a fake HTTP server. [00:25]
JAAI'm glad that there are standards, but I hate the standards. They're so hard to read at times. [00:26]
timmcugh implied LWS [00:26]
JAAYes, but...
RFC 2616 was obsoleted by 7230. 7230 uses ABNF from RFC 5234.
5234 specifically states that "This specification for ABNF does not provide for implicit specification of linear white space." and "Any grammar that wishes to permit linear white space around delimiters or string segments must specify it explicitly."
And I can't find that in 7230 anywhere.
Actually, it states explicitly: "Rules about implicit linear whitespace between certain grammar productions have been removed; now whitespace is only allowed where specifically defined in the ABNF."
[00:26]
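Concretely, the two readings differ only in whether SP/HTAB may surround the hex digits. A sketch in Python (the regexes are paraphrases, not taken from either RFC):

```python
import re

# Strict per RFC 7230: chunk-size = 1*HEXDIG, optional chunk-ext after ";"
STRICT = re.compile(rb"\A[0-9A-Fa-f]+(;[^\r\n]*)?\Z")
# Lenient, RFC 2616 implied-LWS reading: SP/HTAB allowed around the size
LENIENT = re.compile(rb"\A[ \t]*([0-9A-Fa-f]+)[ \t]*(;[^\r\n]*)?\Z")

line = b"c6 "                                  # size line, CRLF stripped
print(bool(STRICT.match(line)))                # False: strict parser chokes
print(int(LENIENT.match(line).group(1), 16))   # 198
```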
JRWRman the scaleway API is strange [00:29]
***Ravenloft has quit IRC (Read error: Operation timed out) [00:30]
.... (idle for 15mn)
VADemon has quit IRC (Read error: Connection reset by peer) [00:45]
joepie91timmc: LWS? [00:45]
JAALinear white space
Meaning white space (0x20) and horizontal tabs (0x09).
[00:45]
timmcJAA: Too little too late, I suppose. [00:48]
JAASo here's what I think is happening on those pages: Apache, portalgraphics's web server, for some reason pads the chunk sizes with spaces to multiples of three. Since almost all clients probably support RFC 2616 (backwards compatibility etc.), this isn't actually a problem, although it isn't exactly conformant with the most up-to-date standards. (portalgraphics may have been using an old version of Apache though from before the release of RFC 7230.)
However, the IA library handling HTTP responses uses RFC 7230 and therefore doesn't allow whitespace after the chunk size. It fails to decode it and handles it as raw data instead, in effect simply dropping the "Transfer-Encoding: chunked" header.
Which then leads to "garbage" showing up in the final response from the IA.
[00:48]
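A toy model of that suspected behavior, assuming the library attempts a strict decode and serves the stored bytes untouched when it fails (this is a guess at the mechanism, not IA's actual code):

```python
def strict_dechunk(raw: bytes) -> bytes:
    """Decode chunked framing, rejecting any padding (RFC 7230 style).
    Trailers after the zero-length chunk are ignored in this toy."""
    out, pos = b"", 0
    while True:
        eol = raw.index(b"\r\n", pos)
        field = raw[pos:eol].split(b";")[0]       # chunk-ext is legal, drop it
        if not field or field.strip(b"0123456789abcdefABCDEF"):
            raise ValueError("bad chunk size: %r" % field)
        size = int(field, 16)
        if size == 0:
            return out
        out += raw[eol + 2:eol + 2 + size]
        pos = eol + 2 + size + 2                  # skip data + trailing CRLF

def replay_body(raw: bytes) -> bytes:
    try:
        return strict_dechunk(raw)
    except ValueError:
        # Suspected failure mode: the framing (hex sizes, CRLFs) leaks
        # through into the replayed page as "garbage".
        return raw

assert replay_body(b"5\r\nhello\r\n0\r\n\r\n") == b"hello"
padded = b"5 \r\nhello\r\n0 \r\n\r\n"
assert replay_body(padded) == padded              # padding trips the parser
```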
JRWRAnyone want my scaleway script
it deploys 7 grab scripts at a time
using the arm64 instances
(2.99Euro)
[00:53]
voidstasure, share it :) [00:54]
JAABy the way, if you request the id_ resource for that link I gave above, the IA sends it in chunked transfer encoding again; the raw traffic back from IA to the browser then looks double-chunk-encoded: https://gist.githubusercontent.com/anonymous/accf1455050dcf01f19a3b6d1f7cf658/raw/89f5ab19945c49e3770bb6571e36b9f2ae8f1594/gistfile1.txt [00:54]
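That nesting is what you'd expect if the replaying server re-applies chunked encoding to a stored body that still carries the origin's framing. A minimal sketch of how the doubling would arise (arbitrary chunk sizes):

```python
def chunk(body: bytes, piece: int = 16) -> bytes:
    """Apply HTTP/1.1 chunked transfer encoding (no trailers)."""
    out = b""
    for i in range(0, len(body), piece):
        part = body[i:i + piece]
        out += b"%x\r\n%s\r\n" % (len(part), part)
    return out + b"0\r\n\r\n"

page = b"<html>hello</html>"
once = chunk(page)      # origin's framing, stored verbatim in the WARC
twice = chunk(once)     # server chunks the still-framed bytes again
print(twice.decode("ascii"))
```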
JRWRand voidsta a script to clean up all the servers as well [00:55]
voidstaJRWR: cool :) [00:55]
JAAJRWR: I'd love to see how you automated it, although I won't be using it directly. I've been meaning to look into how to make the whole process of joining a new project a bit easier across multiple machines. [00:55]
JRWRya
its simple as fuck really
[00:55]
voidstasame [00:56]
JRWRhttps://gist.github.com/JRWR/4b1cdbe0f55f00d92c10ff1e2355c5b7
there you go
thats both scripts
updated to show my script.sh it sent to the servers
[01:00]
***ajft has joined #archiveteam-bs [01:01]
JRWRmostly its the default one with a screen -dm on the run-pipeline [01:01]
voidstacool, thanks for sharing [01:02]
JRWRits very shotgun style
but it gets the job done
[01:03]
***ndiddy has quit IRC () [01:05]
JAAThanks, I'll have a look at it tomorrow.
"Some twats drove a van into pedestrians and stabbed people. But don't despair, this will never happen again once we start regulating the internet." FFS, Theresa...
[01:05]
***j08nY has quit IRC (Quit: Leaving) [01:12]
xmcseems reasonable
worked here in the usa
[01:14]
joepie91JAA: interestingly, the chunk cutoffs happened in specific places a lot. I wonder whether you can infer where variables were used (in string-concatenation, in PHP) from the chunk cutoff points
JAA: also, if that theory is correct, it'd be relatively simple to fix all the WARCs
[01:19]
xmcif modifying warcs is acceptable.
tbh i'm on the fence about that
even in this case
[01:21]
JAAI guess it might be possible to infer something about internal buffer sizes etc.
Agreed. The WARCs contain an accurate representation of what the web server delivered to clients. The fact that some clients or libraries can't handle it is secondary to preserving the original data in my opinion.
[01:22]
***ndiddy has joined #archiveteam-bs [01:23]
JAAWe definitely need to get in touch with the IA though so they can fix their software if my assumptions above are correct. [01:25]
***BlueMaxim has joined #archiveteam-bs [01:25]
JAAI'm sure they have tons of other pages with the same "bug". [01:25]
xmcquite possibly! [01:27]
***schbirid2 has joined #archiveteam-bs
schbirid has quit IRC (Read error: Operation timed out)
[01:36]
joepie91okay, that was poorly worded
it'd be relatively simple to fix the wayback output*
:p
cc JAA xmc
definitely not advocating for [irrevocably] modifying source data
[01:42]
xmcah yep [01:42]
joepie91but eg. storing a 'fixed' copy of the WARC can be desirable for perf purposes (over fixing stuff on-the-fly in the wayback)
without touching the original
[01:43]
***icedice has quit IRC (Ping timeout: 250 seconds) [01:53]
JRWRJRWR spins up 100 instances
oops
[01:54]
voidsta:) [02:03]
***pizzaiolo has quit IRC (Ping timeout: 260 seconds) [02:04]
..... (idle for 22mn)
JRWR has quit IRC (Quit: Page closed) [02:26]
.......... (idle for 49mn)
superkuh has quit IRC (Quit: the neuronal action potential is an electrical manipulation of reversible abrupt phase changes in the lipid bilaye) [03:15]
Aranje has quit IRC (Quit: Three sheets to the wind) [03:24]
Sk1d has joined #archiveteam-bs [03:34]
ndiddy has quit IRC () [03:44]
ajft has left [03:51]
.... (idle for 17mn)
slyphic has quit IRC (Read error: Operation timed out)
slyphic has joined #archiveteam-bs
[04:08]
MrRadarOver in #outofsteam DoomTay noticed that SPUF was returning data with chunked encoding when he used wpull to grab their front page
I checked on my end and found out that they did that for both wpull and wget-lua if a custom user-agent is not specified
With the ArchiveTeam user-agent SPUF returns data without chunking
I've also verified that both wpull and wget-lua are producing WARCs with the same corruption as portalgraphics when SPUF returns data with chunked transfers
[04:13]
We've also figured out that using a browser User-Agent still results in chunked transfers, but adding an Accept header like an actual browser would send will cause it to switch back to non-chunked transfers [04:24]
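For reproducing that kind of header-dependent behavior, a hypothetical probe (host and headers are placeholders) that speaks HTTP/1.1 over a raw socket and reports whether the response is chunked:

```python
import socket

def is_chunked(host: str, extra_headers: dict) -> bool:
    """Fetch / from `host` over plain HTTP/1.1 and report whether the
    response carries Transfer-Encoding: chunked."""
    req = "GET / HTTP/1.1\r\nHost: %s\r\nConnection: close\r\n" % host
    for name, value in extra_headers.items():
        req += "%s: %s\r\n" % (name, value)
    req += "\r\n"
    with socket.create_connection((host, 80), timeout=10) as sock:
        sock.sendall(req.encode("ascii"))
        raw = b""
        while b"\r\n\r\n" not in raw:            # read until end of headers
            data = sock.recv(4096)
            if not data:
                break
            raw += data
    head = raw.split(b"\r\n\r\n", 1)[0].lower()
    return b"\r\ntransfer-encoding:" in head and b"chunked" in head

# e.g. (hypothetical host):
# is_chunked("example.org", {"User-Agent": "Wget/1.19"})
# is_chunked("example.org", {"User-Agent": "Mozilla/5.0",
#                            "Accept": "text/html,*/*;q=0.8"})
```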
Can someone with access to the tracker please stop Pixiv?
The chunked transfer issue is 100% affecting grabs from them
xmc, arkiver, SketchCow: ^^^
[04:29]
I'm grabbing the latest unpatched wget to see if that has the same issue as wpull and wget-lua [04:37]
This issue affects the official wget 1.19 release [04:46]
OK, I've tracked down the bug in the wget source code.
The way WARC writing works in wget is there are two output files passed to the fd_read_body() function
The first gets only the main content, the second gets both the content and headers
WARC output stores the data from the second stream in the WARC.
However, as the comment on the function says: "If OUT2 is non-NULL, the contents is also written to OUT2. OUT2 will get an exact copy of the response: if this is a chunked response, everything -- including the chunk headers -- is written to OUT2. (OUT will only get the unchunked response.)"
So it's a deliberate design decision to dump the chunked transfer size as part of the WARC output
[04:52]
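In outline, that contract could be paraphrased in Python like this (wget's C is organized differently; only the OUT/OUT2 behavior from the quoted comment is mirrored):

```python
import io

def fd_read_body_sketch(wire_chunks, out, out2):
    """OUT gets only the unchunked entity-body; OUT2 gets an exact copy
    of the wire bytes, chunk headers included (what the WARC writer sees).
    `wire_chunks` is assumed pre-split into (size_line, data) pairs."""
    for size_line, data in wire_chunks:
        out2.write(size_line)            # e.g. b"c6\r\n" -- kept for the WARC
        out2.write(data + b"\r\n")
        out.write(data)                  # decoded content only
    out2.write(b"0\r\n\r\n")             # terminating chunk, WARC copy only

out, out2 = io.BytesIO(), io.BytesIO()
fd_read_body_sketch([(b"5\r\n", b"hello")], out, out2)
assert out.getvalue() == b"hello"
assert out2.getvalue() == b"5\r\nhello\r\n0\r\n\r\n"
```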
voidstaso, not a bug? [04:55]
MrRadarActually, looks like it is a bug [04:56]
voidstahm [04:56]
MrRadarAccording to the "WARC_ISO_28500_final_draft v018" document I found: "The payload of a 'response' record with a target-URI of scheme 'http' or 'https' is defined as its 'entity-body' (per [RFC2616]), with any transfer-encoding removed. If a truncated 'response' record block contains less than the full entity-body, the payload is considered truncated at the same position."
The "with any transfer-encoding" removed bit indicates that this is non-compliant behavior on the part of wget
As the chunked-transfer header would count as part of the transfer-encoding
Unless they mean something completely different than the HTTP spec when they are referring to "transfer-encoding"
(The document can be found here: http://archive-access.sourceforge.net/warc/WARC_ISO_28500_final_draft%20v018%20Zentveld%20080618.doc)
Well, I need to get to bed. See you in the morning
MrRadar is AFK
[04:56]
***Sk1d has quit IRC (Ping timeout: 194 seconds) [05:01]
ItsYoda has joined #archiveteam-bs
Sk1d has joined #archiveteam-bs
[05:06]
godaneso i have only uploaded 37k items this year so far [05:15]
.... (idle for 16mn)
***JRWR has joined #archiveteam-bs [05:31]
JRWRsomething is going down
MrRadar: do you confirm?
[05:31]
MrRadarI'm not 100% sure [05:32]
JRWRill point my webserver to the ingress folder if you want to start checking [05:32]
MrRadarwget is saving the chunked transfer headers into the WARCs but I'm pretty sure that's against the WARC spec
But I'm definitely not an expert on the WARC spec
We'd probably need to hear for sure from somebody at the IA who knows the spec very well
I did confirm the WARCs I was uploading for Pixiv contained the hex garbage
[05:32]
JRWRShit
I want a confirm with a OP
but I will keep the ingress in case of something crazy happening
MrRadar: http://spacescience.tech/warc/incoming-uploads/
you can start checking if you want
[05:35]
MrRadarPicking a file at random, this one has chunked transfer headers in the roomtop.php response body: http://spacescience.tech/warc/incoming-uploads/JRWR/pixiv-roomtop_100594-20170605-044020.warc.gz [05:38]
JRWRya I see them too
Interesting
im looking to see if there are any issues with the dumps
Yep found some
FUCK
http://spacescience.tech/warc/incoming-uploads/Abel_LF/pixiv-roomtop_618874-20170604-145848.warc.gz
Line 426
Shit
there is some in the AMFs
fffffffffffffff
I extracted all the static files
out of the 20, only 2 matched their SHA1s
These are bad dumps
Who do we ping MrRadar
http://imgur.com/6NfmQ
[05:39]
MrRadarI already tried pinging everyone who has tracker access, but none of them are online at the moment
In the mean time you could reduce your rsync to 1 connection max
Or just turn it off altogether
[05:46]
JRWRrsync is OFFLINE [05:49]
MrRadarJRWR: Which AMF files are you seeing with this issue? In the one you linked none of the AMF files were transferred with chunked encoding [05:49]
JRWRmy bad it was the PNGs [05:51]
MrRadarOK, yeah some of those are definitely affected [05:51]
JRWRfelt a great disturbance in the Force, as if millions of voices suddenly cried out in terror and were suddenly silenced. I fear something terrible has happened.
ok
We got to fix this in the meantime
its def Wget-lua doing this
[05:53]
MrRadarYeah, I actually tracked down the cause of the bug while you weren't online
Inside wget
[05:56]
JRWRGood
a simple fix is to disable http1.1
and ask for HTTP/1.0
but that does disable keepalives
wait, how many dumps have been going on over the years with this issue?
I wonder if anyone ever checked
[05:56]
MrRadarWhile I haven't verified with the git history, this looks like it's been a problem since WARC support was first added to wget [05:58]
godaneso i got a 256gb usb stick Saturday
for $45
[05:58]
JRWRso..
thats ALL the dumps?
[05:59]
MrRadarAny ones that have data transferred with the chunked transfer-encoding
Assuming my interpretation of the WARC spec is correct
[06:00]
JRWRhrm [06:00]
MrRadarGiven how extensive the issue is, it may be easier to just update the WARC spec to allow chunked transfer headers inside WARC response records [06:00]
JRWRtrue
so the hex we are seeing are the headers for the next chunk?
[06:00]
godaneso the wget WARC code was screwing things up? [06:02]
MrRadarIf I'm right, yes
But I'm not sure I am
When data is transferred with the HTTP "chunked" transfer-encoding, wget is writing the chunk headers into the WARC
[06:02]
godanebut wouldn't that cause the last few years of archiving to have problems [06:03]
pikhqUnless everyone's been misreading the spec the same way. [06:03]
MrRadarThe WARC spec says "The payload of a 'response' record with a target-URI of scheme 'http' or 'https' is defined as its 'entity-body' (per [RFC2616]), with any transfer-encoding removed." [06:04]
godanebut whatever this bug is its not with everything [06:04]
MrRadarYes, only when the web server uses "Transfer-encoding: chunked" [06:05]
pikhqNot a *lot* of things used chunked encoding. [06:05]
ranmais there a "best way" to back up a reddit post [06:05]
godaneok [06:05]
ranmawith a lot of collapsed comment threads? [06:05]
MrRadarLocally or with e.g. archivebot? [06:05]
ranmafor archive bat
bot
lol
e.g. https://www.reddit.com/r/apple/comments/6ezhwm/iama_foxconn_insider_with_information_on_next_12/dienjss/?context=3
er
https://www.reddit.com/r/apple/comments/6ezhwm/iama_foxconn_insider_with_information_on_next_12
[06:05]
godanethat at least means we shouldn't have alot of corrupt data [06:06]
MrRadar!a https://www.reddit.com/r/apple/comments/6ezhwm/iama_foxconn_insider_with_information_on_next_12/ without an ignore set should do the trick I think?
(Make sure to have the trailing slash)
[06:06]
pikhqgodane: It also implies it should be possible to find all of the data corrupted by this bug.
Though the act of finding all of it is definitely a big one just because of how much data there is to sift through...
[06:07]
ranmaMrRadar: isn't that going to hit all the linked sites
and then maybe a stupid number of other sites?
!a scares me
[06:10]
MrRadar!a only recurses into URLs with the same prefix
URLs with a different prefix will be visited but not recursively
[06:10]
***Igloo has quit IRC (Read error: Operation timed out) [06:11]
MrRadarThat's why the trailing slash would be so important, to limit the scope of the recursion [06:11]
ranmai've seen !a of example.com start to crawl marthastewart.com
hm
not sure if i used trailing slash
[06:11]
SketchCowWhat's the upshot of the bug [06:19]
MrRadarSketchCow: When web servers return data with "Transfer-encoding: chunked" wget is saving information into the WARC that (I think?) the spec says should be stripped
Specifically, the size of each data chunk
[06:20]
pikhqEverything sent from servers using chunked transfer encoding will have spurious hex digits and \r\n sequences in the data that were on the wire, but apparently WARC says aren't supposed to be there.
(that is, in the file itself)
[06:21]
MrRadarYou should ask someone at the IA who is familiar with the WARC format about what the right way to handle chunked transfers is
It's possible I'm just reading the spec wrong and wget is doing it right
[06:21]
pikhqhttps://github.com/iipc/warc-specifications/issues/22
That seems to imply you're reading the spec wrong.
[06:22]
MrRadarpikhq: Reading that discussion I think you're right [06:24]
pikhqAt the least, it's clear the *intention* is wget's behavior. [06:25]
MrRadarYes, they're very deliberately including the headers in the WARC [06:25]
pikhqSo, if you want to process WARC stuff (for rendering or what have you) you should probably be careful to take into account the transfer encoding, or else you'll get the spurious hex digits and such.
But if you're generating a WARC, that's supposed to be there.
[06:26]
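Under that reading, a consumer-side sketch: the record block keeps the raw wire bytes, and the payload is derived by removing the transfer-encoding when the stored headers declare it (deliberately crude header check, hypothetical helper):

```python
def payload_of(block: bytes) -> bytes:
    """A WARC 'response' record block keeps the raw wire bytes; the
    *payload* is the entity-body with the transfer-encoding removed.
    No trailer handling in this sketch."""
    head, _, body = block.partition(b"\r\n\r\n")
    low = head.lower()
    if b"transfer-encoding" not in low or b"chunked" not in low:
        return body
    out, pos = b"", 0
    while True:
        eol = body.index(b"\r\n", pos)
        size = int(body[pos:eol].split(b";")[0], 16)
        if size == 0:
            return out
        out += body[eol + 2:eol + 2 + size]
        pos = eol + 2 + size + 2

block = (b"HTTP/1.1 200 OK\r\nTransfer-Encoding: chunked\r\n\r\n"
         b"5\r\nhello\r\n0\r\n\r\n")
assert payload_of(block) == b"hello"   # spurious hex digits and CRLFs gone
```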
MrRadarThat makes sense
Sorry for the false alarm everyone
JRWR: If you're still around, please restart your rsync target
[06:26]
pikhqNo worries. The standard text is genuinely confusing, and your interpretation is a valid one. [06:27]
JRWRBeen done already [06:27]
pikhq(at least, if you're not reading the exact same way they are) [06:27]
***JRWR_ has joined #archiveteam-bs
JRWR has quit IRC (Ping timeout: 268 seconds)
JRWR_ is now known as JRWR
[06:31]
JRWRSo overall that means IA's Wayback Machine doesn't follow the spec as well then [06:36]
MrRadarI think the issue with portalgraphics was they were sending slightly malformed chunked encoding headers
With extra padding?
That the IA didn't handle but browsers did
If my review of the logs is correct
*chat logs
[06:37]
JRWRSketchCow: Looks like we got blacklisted at pixiv [06:50]
MrRadararkiver: ^^^
It's not by IP since I can view URLs that fail through wget-lua just fine in my browser
Pixiv appears to be running again
[06:51]
JRWRYa
It looks like we got funneled
[06:55]
.... (idle for 15mn)
***Whopper_ has joined #archiveteam-bs
Whopper has quit IRC (Ping timeout: 268 seconds)
[07:10]
.......... (idle for 47mn)
SHODAN_UI has joined #archiveteam-bs [08:00]
......... (idle for 44mn)
Nazca_ has joined #archiveteam-bs [08:44]
Nazca_funneled is good or bad? [08:45]
***Nazca has quit IRC (Read error: Operation timed out)
Nazca_ is now known as Nazca
[08:45]
Igloo has joined #archiveteam-bs [08:55]
..... (idle for 22mn)
godaneDonald Trump on Charlie Rose: https://archive.org/details/Charlie-Rose-1992-11-06 [09:17]
***kristian_ has joined #archiveteam-bs
jtn2 has joined #archiveteam-bs
jtn2 has quit IRC (Read error: Operation timed out)
jtn2 has joined #archiveteam-bs
SHODAN_UI has quit IRC (Remote host closed the connection)
[09:24]
godanei'm close to half way point of uploads from last month
i only uploaded 955 items last month
i was grabbing the Mister Rogers stream and ripping tape this past month
[09:35]
***jtn2 has quit IRC (Read error: Operation timed out)
jtn2 has joined #archiveteam-bs
[09:40]
...... (idle for 25mn)
jtn2 has quit IRC (Read error: Operation timed out) [10:07]
JAA06-05 06:37:06 < MrRadar> I think the issue with portalgraphics was they were sending slightly malformed chunked encoding headers -- Yes, that's how I understand it. Interesting that the WARC should have transfer encoding stripped. I guess it makes sense in a way though. [10:12]
***jtn2 has joined #archiveteam-bs [10:18]
JAABut all in all, I don't think we need to stop current projects or anything like that. It wouldn't be hard to fix WARCs retroactively at some point if we want to do that.
joepie91: Fixing it in the Wayback Machine should be easy. IA's library for handling HTTP responses in WARC files already deals with chunked encoding, just not with this "malformed" variant. No need to update WARCs or anything; instead, the library should be modified to handle the whitespace padding.
[10:23]
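The fix could then be as small as tolerating padding when parsing the size field; a sketch of such a lenient parse (hypothetical function, not IA's actual code):

```python
def parse_chunk_size(field: bytes) -> int:
    """Lenient chunk-size parse: accept the SP/HTAB padding that some
    Apache versions emitted (RFC 2616 implied-LWS reading), while still
    rejecting anything else."""
    field = field.split(b";")[0].strip(b" \t")   # drop chunk-ext, then padding
    if not field or field.strip(b"0123456789abcdefABCDEF"):
        raise ValueError("malformed chunk size: %r" % field)
    return int(field, 16)

assert parse_chunk_size(b"c6 ") == 0xC6     # padded, as in the portalgraphics WARCs
assert parse_chunk_size(b"1a2b") == 0x1A2B  # unpadded still fine
```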
***j08nY has joined #archiveteam-bs [10:27]
SanquiJAA: can you make some sort of writeup so this information doesn't get lost if somebody doesn't get to it right away? [10:27]
JAASanqui: Yeah, sure. [10:29]
***jtn2 has quit IRC (Ping timeout: 250 seconds)
jtn2 has joined #archiveteam-bs
BlueMaxim has quit IRC (Read error: Operation timed out)
BlueMaxim has joined #archiveteam-bs
[10:42]
.......... (idle for 45mn)
BlueMaxim has quit IRC (Ping timeout: 600 seconds)
BlueMaxim has joined #archiveteam-bs
[11:29]
...... (idle for 25mn)
SHODAN_UI has joined #archiveteam-bs [11:55]
kristian_ has quit IRC (Quit: Leaving) [12:07]
........ (idle for 36mn)
tfgbd_znc has quit IRC (Ping timeout: 600 seconds) [12:43]
JAAAnyone want to archive this? ;-) https://www.bleepingcomputer.com/news/security/hadoop-servers-expose-over-5-petabytes-of-data/ [12:52]
***BlueMaxim has quit IRC (Quit: Leaving) [12:53]
............ (idle for 56mn)
superkuh has joined #archiveteam-bs [13:49]
joepie91"To put things in perspective, HDFS servers leak 200 times more data compared to MongoDB servers, which are ten times more prevalent."
~big data~
JAA: hmm. the WARC stores the original chunked data in the WARC?
ie. the stream of bytes as it appeared over the wire
(as opposed to it being turned into just the content)
[13:57]
JRWRI do find that strange for a format like WARC [13:58]
joepie91MrRadar: JRWR: please make sure to confirm intended WARC behaviour with somebody who has access to the *final* WARC spec, to ensure that nothing was changed from the draft [14:02]
JRWRWe did [14:03]
joepie91JRWR: does something still need to be disabled on the tracker?
joepie91 has tracker access
(I'm still reading backlog)
[14:03]
JRWRthere is a issue open on the WARC Spec Github that explains the issue
and currently wget is correct in its saving
right now we are being throttled HARD by pixiv
442053 done + 94722 out + 463227 to do
[14:03]
Kalroththey hit the anti-DDoS panic button [14:05]
joepie91JRWR: right, if something needs to be changed on the tracker and nobody is around, ping me :P
(pinging me on Freenode results in faster responses)
[14:07]
JRWRAh
Its OK for now, kind of wish pixiv had not throttled us
[14:07]
joepie91I'm going to be pretty busy today though, so preferably include a very precise request of what needs changing so that it's just a few clicks for me and doesn't require extra thinking :P [14:08]
JRWRof course joepie91
The only warning I've got on my dash right now is my storage is now half full
[14:11]
***pizzaiolo has joined #archiveteam-bs
icedice has joined #archiveteam-bs
[14:14]
MrRadarjoepie91: Yeah, after reading the spec issue on GitHub I realized I was initially reading the spec wrong and wget is doing the right thing
I was confused about what the portalgraphics issue was earlier
I missed that it was the *extra whitespace* in their chunked transfer headers that was the issue
Not the headers themselves
[14:22]
.......... (idle for 45mn)
***SHODAN_UI has quit IRC (Quit: zzz)
SHODAN_UI has joined #archiveteam-bs
[15:08]
.... (idle for 19mn)
JAAjoepie91: Yes, as far as I can tell, wget and wpull store the raw data stream in the WARCs. In a way, that's exactly what I'd expect, although I can also see some arguments for stripping transfer encoding first.
On a related note, I find it interesting that TLS certificates aren't stored in WARCs.
[15:27]
joepie91JAA: that might just be a wget thing? I know that Heritrix stores a lot more stuff in WARCs than wget does, even down to DNS requests and responses [15:39]
JAAOh yeah, DNS as well.
That's very well possible.
joepie91: Do you have an example Heritrix WARC? I'd like to know how they store those things.
[15:40]
joepie91JAA: I don't, unfortunately. somebody in here has made some in the past
but that was a few years ago :)
[15:46]
***icedice has quit IRC (Ping timeout: 245 seconds) [15:56]
....... (idle for 32mn)
icedice has joined #archiveteam-bs [16:28]
......... (idle for 43mn)
JRWR has quit IRC (Ping timeout: 268 seconds) [17:11]
ZexaronS has joined #archiveteam-bs [17:16]
....... (idle for 34mn)
dashcloud has quit IRC (Ping timeout: 260 seconds)
fie has quit IRC (Read error: Operation timed out)
[17:50]
........ (idle for 37mn)
za3k has joined #archiveteam-bs [18:31]
za3k#internetarchive
i'm an idiot, ignore
[18:31]
***Rai-chan has quit IRC (Ping timeout: 268 seconds) [18:33]
za3kWhat I meant to say is: https://za3k.com/github/ is back up and actively archiving the summary metadata of github projects (mostly names and ids)
ghtorrent.org is pretty much strictly better, does anyone already have a copy?
[18:33]
***Jon has quit IRC (Ping timeout: 268 seconds)
Jon has joined #archiveteam-bs
Aoede has quit IRC (Ping timeout: 268 seconds)
purplebot has quit IRC (Ping timeout: 268 seconds)
Aoede has joined #archiveteam-bs
fie has joined #archiveteam-bs
[18:34]
purplebot has joined #archiveteam-bs
Rai-chan has joined #archiveteam-bs
[18:43]
.... (idle for 18mn)
SHODAN_UI has quit IRC (Remote host closed the connection) [19:01]
xmc has quit IRC (Read error: Operation timed out)
xmc has joined #archiveteam-bs
swebb sets mode: +o xmc
[19:06]
.... (idle for 19mn)
SketchCowFOS is now back to half-full, although you maniacs could probably fill it if you tried [19:28]
***JRWR has joined #archiveteam-bs [19:28]
za3k has quit IRC (Quit: http://chat.efnet.org (EOF)) [19:33]
....... (idle for 30mn)
zinozino whistles innocently. [20:03]
....... (idle for 34mn)
***gui7 has joined #archiveteam-bs
gui7 has left LIST
gui7 has joined #archiveteam-bs
gui7 has quit IRC (Remote host closed the connection)
gui7 has joined #archiveteam-bs
SHODAN_UI has joined #archiveteam-bs
[20:37]
.............. (idle for 1h8mn)
icedice has quit IRC (Quit: Leaving)
gui7 has quit IRC (Leaving.)
[21:48]
deathySketchCow: maybe update http://www.archiveteam.org/index.php?title=Rescuing_optical_media in case you know of better tools now? I'm also working through a backlog of personal CD/DVDs now... [21:53]
.......... (idle for 49mn)
***dashcloud has joined #archiveteam-bs [22:42]
.... (idle for 19mn)
yakfish has quit IRC (Operation timed out) [23:01]
SHODAN_UI has quit IRC (Remote host closed the connection) [23:06]
wp494deathy: anyone can [23:10]
***twigfoot has joined #archiveteam-bs [23:20]
.... (idle for 17mn)
Odd0002I used readom for my ISOs and they all seem to work fine in a windows 98SE VM [23:37]
***ndiddy has joined #archiveteam-bs
GLaDOS has joined #archiveteam-bs
[23:39]
