[00:40] *** ta9le has quit IRC (Quit: Connection closed for inactivity)
[00:46] *** ndiddylap has joined #archiveteam-bs
[02:07] *** Mateon1 has quit IRC (Read error: Operation timed out)
[02:07] *** Mateon1 has joined #archiveteam-bs
[02:14] *** ndiddy_ has joined #archiveteam-bs
[02:18] *** ndiddylap has quit IRC (Read error: Operation timed out)
[02:19] *** ndiddylap has joined #archiveteam-bs
[02:21] *** apache2 has quit IRC (Remote host closed the connection)
[02:21] *** apache2 has joined #archiveteam-bs
[02:22] *** ndiddy_ has quit IRC (Read error: Operation timed out)
[02:24] *** Tenebrae has quit IRC (Ping timeout: 260 seconds)
[02:25] *** Tenebrae has joined #archiveteam-bs
[02:25] *** plue has quit IRC (Ping timeout: 260 seconds)
[02:26] *** plue has joined #archiveteam-bs
[02:50] OK
[02:50] I tend to upload when they stop growing for a while, if that matters
[02:50] But I'm fine
[03:17] *** qw3rty117 has joined #archiveteam-bs
[03:23] *** qw3rty116 has quit IRC (Read error: Operation timed out)
[03:41] *** odemg has quit IRC (Ping timeout: 260 seconds)
[03:53] *** odemg has joined #archiveteam-bs
[03:59] *** sep332 has quit IRC (Read error: Operation timed out)
[04:32] *** ndiddylap has quit IRC (Read error: Operation timed out)
[05:06] *** Pixi has quit IRC (Quit: Pixi)
[05:06] *** Pixi has joined #archiveteam-bs
[06:20] *** schbirid has joined #archiveteam-bs
[07:15] *** schbirid has quit IRC (Quit: Leaving)
[08:21] *** SmileyG_ has joined #archiveteam-bs
[08:24] *** SmileyG has quit IRC (Ping timeout: 260 seconds)
[09:44] SketchCow: ok
[09:44] any word from Mank?
[10:23] *** ta9le has joined #archiveteam-bs
[11:32] JAA: It wasn't long ago when Jagex sold their business to the Chinese, or something. It's been downhill from there. They also increased costs of their subscription recently.
[11:33] I see.
[11:47] http://www.runescape.com/robots.txt
[11:47] I guess I have no words
[11:56] lol
[12:46] I almost forgot. Jagex is also closing Ace of Spades in few days.
[12:46] Ace of Spades, RuneScape Classic and FunOrb. Whee.
[12:56] *** C4K3 has quit IRC (Read error: Operation timed out)
[13:33] *** BlueMax has quit IRC (Leaving)
[13:52] *** C4K3 has joined #archiveteam-bs
[14:11] *** ndiddylap has joined #archiveteam-bs
[14:13] *** balrog has quit IRC (Bye)
[14:21] *** balrog has joined #archiveteam-bs
[14:21] *** swebb sets mode: +o balrog
[14:33] lmao this robots.txt
[14:43] Frogging: Where?
[14:43] http://www.runescape.com/robots.txt
[14:43] oh, for runescape, lol
[14:43] yeah
[14:44] damn, keep running into this: error uploading at-00156.warc: We encountered an internal error. Please try again. - uploadItem.py
[14:47] worked after four retries :/
[14:48] *** balrog has quit IRC (Bye)
[14:50] *** balrog has joined #archiveteam-bs
[14:50] *** swebb sets mode: +o balrog
[14:55] tyzoid: where are you uploading these?
[14:55] which item
[14:57] arkiver: https://archive.org/download/tyzoid-acidplanet-audio
[14:57] I'm going through and re-uploading the ones that failed to upload before
[14:58] ah you're using the new wget
[14:58] it writes different WARC headers than normal
[14:58] whatever is latest on ubuntu 18.04, I assume it's new
[14:58] https://archive.org/download/tyzoid-acidplanet-audio/tyzoid-acidplanet-audio.cdx.idx
[14:58] Hi
[14:59] lines from the IDX
[14:59] 20180527113823 tyzoid-acidplanet-audio.cdx.gz 563522 183423
[14:59] note the that is wrong
[14:59] is there a good reason why tarballs are excluded from most of the https://kernel.org archives?
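
A side note on the IDX problem above: the angle brackets are the culprit. The URL below is a made-up placeholder, not one from the actual upload; most WARC-writing tools (and WARC 1.1) write the target URI bare, while the newer wget follows the WARC/1.0 grammar to the letter and wraps it in angle brackets, which is what the CDX/derive step trips over.

What most tools (and WARC 1.1) write:
WARC-Target-URI: https://example.org/song.mp3

What the newer wget writes:
WARC-Target-URI: <https://example.org/song.mp3>
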
[14:59] we encountered this issue before, IA can't handle it (yet), I'll raise an issue
[14:59] Sounds good
[15:00] the point of archiving that website is so there are copies of older Linux Kernel sources
[15:00] (this is not related to your uploading issue)
[15:00] DragonMon: It's backed by git, so you can always git checkout to the tag
[15:01] tyzoid: yes but is there a web archive of the git repo?
[15:01] lol
[15:01] DragonMon: Yes: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/
[15:02] not what I meant
[15:02] I'm sure Linux is one of the most archived things around but shouldn't archive.org include the source tarballs?
[15:02] you can go all the way back if you want: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/snapshot/linux-2.6.13-rc3.tar.gz
[15:03] just grab all the snapshots
[15:03] arkiver: Yeah, no problem. I just wrapped it in a loop to retry on nonzero return status
[15:04] tyzoid: but why couldn't archive.org also get a copy of those?
[15:04] the download links are broken if navigated in archive.org
[15:06] *** moufu_ is now known as moufu
[15:08] Ew, that uri definition problem from the WARC specification again.
[15:08] yeah
[15:08] raised an issue, shouldn't be too hard to fix in the derive process
[15:09] proces*
[15:09] So they changed wget to comply with WARC 1.0 strictly instead of moving to 1.1?
[15:09] ... or just ignoring it, since all other tools don't include the angle brackets anyway.
[15:10] DragonMon: seems like it's grabbing it "200 https://cdn.kernel.org/pub/linux/kernel/v4.x/linux-4.9.103.tar.xz"
[15:10] tyzoid: I see a older archiveteam archive from a few days ago on archive.org and the links are broken
[15:11] DragonMon: Link?
[15:11] hang on
[15:12] JAA: http://web.archive.org/web/20180529073551/https://git.kernel.org/torvalds/t/linux-4.17-rc7.tar.gz
[15:12] https://web.archive.org/web/20180521085957/https://www.kernel.org/
[15:12] link clicked on from http://web.archive.org/web/20180529073551/https://www.kernel.org/
[15:13] right
[15:13] DragonMon: Hmm, nobody grabbed kernel.org on that date directly. Most likely, it was just a link to kernel.org from another site.
[15:13] ArchiveBot grabs external links, but it doesn't recurse on them for obvious reasons.
[15:14] hmm
[15:14] So if you grab example.org and example.org/kernel.html has a link to kernel.org, it'll grab kernel.org but not any links on it.
[15:14] tyzoid: so it should show up on the most recent grab?
[15:14] idk
[15:14] perhaps
[15:15] it's strange if it doesn't.... It's open source, it's meant to be saved and shared
[15:15] It won't: https://cdn.kernel.org/robots.txt
[15:15] JAA: I would imagine that the link shouldn't be broken, though, it'll get the nearest 20x response in time to the current archive
[15:15] unless it's not archived at all
[15:15] Correct.
[15:15] I wonder what the idea is behind that, it seems odd
[15:16] But in this case, it wouldn't work ever because cdn.kernel.org blocks the access to robots.
[15:16] Probably to prevent unnecessary traffic from crawlers.
[15:16] I see that but why limit grabs like that? ddos maybe?
[15:16] JAA: tyzoid: well it's raised, I don't expect it to be too hard to fix, will keep you informed :)
[15:16] hmm
[15:16] git.kernel.org has the same thing.
[15:16] arkiver: Thanks
[15:16] git* I get
[15:17] I wouldn't be surprised if the people over at kernel.org would be willing to add an exception for ia_archiver to robots.txt.
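
For reference, here is a sketch of what such an ia_archiver exception could look like (a hypothetical file, not the real kernel.org one):

User-agent: ia_archiver
Disallow:

User-agent: *
Disallow: /

The empty Disallow in the ia_archiver group allows everything for that crawler while the catch-all group keeps blocking the rest; under the original robots.txt rules a crawler obeys the group that best matches its own name, so it should not matter whether this block sits before or after the catch-all.
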
[15:17] JAA: Yeah, http://web.archive.org/web/*/https://cdn.kernel.org/pub/linux/kernel/v3.x/* isn't turning up any results.
[15:18] Yeah... Try this: https://web.archive.org/save/https://cdn.kernel.org/pub/linux/kernel/v4.x/
[15:18] "Page cannot be displayed due to robots.txt"
[15:18] https://cdn.kernel.org/robots.txt
[15:19] Exactly.
[15:19] kernel.org has no such restriction
[15:19] though since we've grabbed it via archivebot, it should appear, right?
[15:19] Well, the Wayback Machine will still block access to it.
[15:19] But the data will be there in the WARCs.
[15:19] ah, right.
[15:20] darn
[15:20] And hopefully IA will some day finally remove that robots.txt handling.
[15:20] nah, people who don't want their stuff online will go to IA nagging for their content to be removed
[15:21] which is ironic I think
[15:21] I would imagine that the wayback machine falls under fair use law in the US anyway
[15:23] I'm about to fire off a email to the Linux Foundation. What do they need to add to allow ia_archiver?
[15:25] User-agent: ia_archiver
[15:25] Disallow:
[15:26] I think they'd have to add that before the general disallow rule.
[15:27] not after?
[15:29] JAA: https://moz.com/learn/seo/robotstxt makes it seem like order doesn't matter
[15:30] I'm not really sure to be honest.
[15:31] ok well they should know, someone there setup a flag for the Google bot
[15:32] The original spec doesn't really mention anything about it: http://www.robotstxt.org/orig.html
[15:32] Email has been sent, I'll see what they respond with
[15:32] Sweet
[15:34] helpdesk@rt.linuxfoundation.org For all issues with Linux Foundation websites or systems, including questions about Linux.com email addresses.
[15:34] JAA: I'm cleaning up my archivebot box from the acid grab, so things should start moving a bit better for the archivebot
[15:34] By the way: https://archive.org/details/git-history-of-linux
[15:35] sweet
[15:35] I wonder how they did that without changing commit IDs
[15:36] Considering that git-filter-branch was used according to the description, it probably did change the commit IDs.
[15:37] yeah, it says something about git graft, though I'll have to read up more on it
[15:37] Yeah, looks like those three parts are sort-of merged together without affecting the history.
[15:38] "Graft points or grafts enable two otherwise different lines of development to be joined together. It works by letting users record fake ancestry information for commits. This way you can make git pretend the set of parents a commit has is different from what was recorded when the commit was created." https://git.wiki.kernel.org/index.php/GraftPoint
[15:38] Never heard of it before. Very interesting.
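
The graft trick quoted above can be sketched in a couple of commands; the commit IDs here are placeholders. Grafts (and the newer git replace --graft) only change how git views the ancestry locally, and it is a later git filter-branch run that bakes the fake parent in, rewriting the affected commit IDs, which fits the suspicion above that the IDs did change.

# old-style graft: record a fake parent for a commit (placeholder IDs)
echo "<child-commit> <fake-parent-commit>" >> .git/info/grafts

# the newer equivalent of the same thing
git replace --graft <child-commit> <fake-parent-commit>

# make the grafted ancestry permanent; this rewrites commit IDs from that point onward
git filter-branch -- --all
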
[15:54] whelp I quick fired the email to the wrong place, sent a new one to webmaster@kernel.org
[15:54] someone did get back to me from Linux Foundation pointing me to that email
[16:21] *** wp494 has quit IRC (Ping timeout: 633 seconds)
[16:22] *** wp494 has joined #archiveteam-bs
[16:23] *** svchfoo1 sets mode: +o wp494
[16:28] JAA: tyzoid https://i.imgur.com/EINMXfh.png this is their reply
[16:29] DragonMon: No, we are referring to the tarballs on cdn.kernel.org
[16:29] https://cdn.kernel.org/robots.txt
[16:29] which are the release tarballs
[16:29] and those are located at https://cdn.kernel.org/pub
[16:30] and the kernel mirror denies all bots too (where www.kernel.org/pub redirects to)
[16:30] https://mirrors.edge.kernel.org/robots.txt
[16:37] tyzoid: ok I bounced another email "When Internet Archive goes to archive that website https://kernel.org/pub it gets redirected to https://mirrors.edge.kernel.org/pub which has a restriction https://mirrors.edge.kernel.org/robots.txt Can that be fixed so anything under the sub folder pub can be archived?"
[16:38] Well, we don't necessarily want to mirror the entire software mirror
[16:38] It's really the stuff under cdn.kernel.org which we're after
[16:39] Well the they are saying that /pub is supposed to be restriction free
[16:39] hmm
[16:40] I should have been more clear to not restrict the source code tarballs
[16:55] " DragonMon: if you're concerned about those release tarballs, we got 'em when we grabbed kernel.org" tyzoid: where would they be available then?
[16:56] Soon, at an archive near you. ;-)
[16:56] There's a delay of some hours to few days until the archives from ArchiveBot end up on IA.
[16:57] Until the robots.txt is fixed, you'll have to access the files directly from the WARCs, not through the Wayback Machine.
[16:57] so the collections and not as apart of the main archive.org. So the links will be broken
[16:58] The links will be broken because of robots.txt, not because of where the archives are stored.
[16:58] ArchiveBot WARCs do get ingested into the Wayback Machine, but robots.txt prevents the access for these specific URLs.
[16:58] *** jschwart has joined #archiveteam-bs
[16:59] wait, that seems somehow worse. I thought the files got chucked because of robot.txt So there's potentially tons of data archive.org has but cannot make readily easy to access?
[17:00] Yep
[17:00] oh damn
[17:03] Better to have the data somewhere behind a (complete or partial) block than to not have it at all. But yeah, it's not optimal.
[17:04] but for example https://web.archive.org/web/20170405222346/http://cdn.kernel.org:80/pub/linux/kernel/v4.x/ works?
[17:04] only not able to save stuff through live wayback
[17:04] Yeah, looks like there was no robots.txt in the past: https://web.archive.org/web/20170407001344/http://cdn.kernel.org/robots.txt
[17:05] afaik IA doesn't care when robots.txt was created, it's about the latest one
[17:05] Yeah, I was about to ask that. That's my experience as well.
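
Since for now the tarballs have to be pulled straight out of the ArchiveBot WARCs rather than through the Wayback Machine (as noted above), here is a minimal sketch using the warcio Python library. The WARC filename is a placeholder; the tarball URL is the one that showed up in the log earlier.

from warcio.archiveiterator import ArchiveIterator

# Placeholder filename; the real ArchiveBot WARCs are named differently.
warc_path = 'kernel.org-warc-00000.warc.gz'
target = 'https://cdn.kernel.org/pub/linux/kernel/v4.x/linux-4.9.103.tar.xz'

with open(warc_path, 'rb') as stream:
    for record in ArchiveIterator(stream):
        # Only response records carry the downloaded payload.
        if record.rec_type != 'response':
            continue
        if record.rec_headers.get_header('WARC-Target-URI') == target:
            with open('linux-4.9.103.tar.xz', 'wb') as out:
                out.write(record.content_stream().read())
            break
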
[17:06] IA recently changed something with robots.txt, not sure what exactly could be viewing only and not saving through live
[17:06] (remember all the angry people)
[17:06] so if you want it archived do it through archivebot :)
[17:08] it could be a bug that https://web.archive.org/web/*/http://cdn.kernel.org:80/pub/linux/kernel/* doesn't list anything while https://web.archive.org/web/20170405222346/http://cdn.kernel.org:80/pub/linux/kernel/v4.x/ exists
[17:08] will report that too
[17:09] hmm is https://mirrors.edge.kernel.org/pub not equal to http://cdn.kernel.org:80/pub/linux/kernel/v4.x/
[17:10] I mean are the two pub folders the same content?
[17:10] seems like it is
[17:17] DragonMon: What I was saying is that https://www.kernel.org/pub redirects to mirrors.edge.kernel.org/pub
[17:17] www.kernel.org/pub is the url they mentioned in their email as specifically allowing bots, which after the redirect does not
[17:18] right ok, I did let them know. I haven't gotten a reply yet
[17:23] *** schbirid has joined #archiveteam-bs
[17:27] tyzoid: JAA arkiver https://i.imgur.com/776k3T0.png They are updating it
[17:28] DragonMon: sweet!
[17:31] So if all these archives are getting uploaded as WARC files, is it possible to de-duplicate?
[17:32] I'm not sure how the archive handles duplicate content
[17:32] match files that are identical from other archives and uploads to reduce overall data size
[17:32] hmm
[17:33] if not that's a crazy amount of data that got uploaded when a site had restrictive robots.txt to when it didn't
[17:33] any site that did this
[17:38] geez I hope archive.org does something for data duplication. Otherwise where are they getting the cash for this?
[17:39] Donations mostly
[17:39] Hard disk space is relatively cheap
[17:40] *** Valentine has quit IRC (Quit: Addio, adieu, adios, aloha, arrivederci, auf Wiedersehen, au revoir, bye, bye-bye, cheerio, cheers, farewell, good)
[17:40] the amount of money and time I spend for my measly 4 TB of personal data on 8TB total of drives... doesn't hold a candle to this
[17:41] Economies of Scale work to their benefit here
[17:42] They can order hard drives in bulk, saving money
[17:42] and they have very efficient storage systems, which don't require a ton of power
[17:42] i.e. 4u racks of hard drives
[17:42] http://archive.org/web/petabox.php
[17:44] I wonder what the failure rate is
[17:44] quite low, in general
[17:44] how often they need to replace a drive
[17:44] I'd guess it's similar to backblaze stats
[17:45] https://www.backblaze.com/blog/hard-drive-failure-rates-q1-2017/
[17:45] Has anyone tried to take them down?
[17:45] IA? I'm sure
[17:45] But being a nonprofit library means you have allies
[17:46] Hackers sure... but I'm talking about physical attacks
[17:46] oh, idk
[17:46] doubt it, though
[17:47] DragonMon: I'd expect about ~3-4% failure rate per year, estimating on the high side
[17:47] of all the crazy crap going on in the world, libraries and projects like archive.org seem to be constants
[17:47] And I would probably expect to refresh the physical servers about once every 6-8 years
[17:47] things you can rely on
[17:48] in recent history that is
[17:48] and with the IA.BAK, we can hope that it'll continue
[17:48] you never really hear about libraries getting attacked
[17:48] *** SimpBrain has quit IRC (Read error: Operation timed out)
[17:49] *** SimpBrain has joined #archiveteam-bs
[17:49] DragonMon: IIRC the library of Alexandria was intentionally burned down while at war.
[17:50] it's why I said 'recent history
[17:50] '
[17:50] *** Valentine has joined #archiveteam-bs
[17:50] lol
[17:51] tyzoid: fixed
[17:51] https://mirrors.edge.kernel.org/robots.txt
[17:51] should I run another archive request?
[17:51] DragonMon: Mosul public library by Isis in 2015, Libraries in Anbar Province by Isis in 2014, Mosul private libraries by Isis in 2014, National Archives of Bosnia and Herzegovina by rioters in 2014...
[17:52] need I go on?
[17:52] https://en.wikipedia.org/wiki/List_of_destroyed_libraries
[17:52] DragonMon: The previous archivebot grab should have gotten most things. You can try, if you want.
[17:53] tyzoid: I guess I missed
[17:53] ams.edge.kernel.org still has User-Agent: * Disallow: /
[17:53] lindalap: I don't think that should be a problem
[17:54] tyzoid: wouldn't the last archive done include the old robot.txt?
[17:59] tyzoid: I tried manually saving a link using archive.org itself and it's still complaining about robots.txt
[17:59] IA does no deduplication
[17:59] So if all these archives are getting uploaded as WARC files, is it possible to de-duplicate?
[18:00] well no deduplication of WARCs in items
[18:00] arkiver: so say website-fun.com/this.png was IDENTICAL to twitter.com/this.png would it still get duplicated?
[18:00] yes
[18:01] i'm against deduplicating it right now too
[18:01] yea it might cause some confusion if something gets corrupted
[18:01] note: not necessarily IA opinion
[18:01] ArchiveBot should deduplicate within one job, but that's broken at the moment.
[18:01] Not across jobs though.
[18:01] so WARCs currently hash the payload using SHA1
[18:02] which can cause collision with the earlier demonstrated attack
[18:02] causing stuff to be 'deduplicated'/deleted from the wayback machine if done succesful in certain circumstances
[18:03] that is different WARC payloads with the same SHA1
[18:03] arkiver: Didn't google have a patch for sha1 that returned a different hash if a bad input is detected?
[18:03] no idea
[18:03] That sounds like an awful idea.
[18:03] but that is just getting trying to get rid of symptoms
[18:04] yeah, sha3 ftw
[18:04] Or SHA-2.
[18:05] will archive.org eventually 'sync' and unblock data once it processes the new robots.txt? Because it's still giving me an error about robots.txt after the change
[18:05] IIRC yes
[18:05] Yes, that's what should happen.
[18:05] problem is that the links go to cdn.kernel.org
[18:05] alright cool
[18:05] oh
[18:05] so we're still at square one
[18:05] erm
[18:06] square one of what
[18:06] arkiver: Go to kernel.org, and hover over any of the download links
[18:06] yeah
[18:06] it'll show that the link goes to cdn.kernel.org/pub
[18:06] yeah
[18:06] or git.kernel.org/pub
[18:06] I thought they're going to update the cdn.kernel.org robots.txt?
[18:06] which are still blocked
[18:06] JAA: They changed mirrors.edge.kernel.org/robots.txt
[18:06] Hmm, why not the CDN?
[18:07] I think it was already demonstrated that they are downloadable from the wayback machine if saved?
[18:07] just save them through archivebot if you want them to be saved
[18:07] JAA: From what I can tell, the CDN is generated from cgit on the web
[18:07] which they claim to put strain on their systems to allow robots
[18:07] tyzoid: https://i.imgur.com/EINMXfh.png was only about git.kernel.org.
[18:07] so I'd say not square one?
answer would be archivebot and downloading from wayback seems to work, at least for that page that we checked
[18:08] https://web.archive.org/web/20170405222346/http://cdn.kernel.org:80/pub/linux/kernel/v4.x/
[18:08] JAA: cdn.kernel.org looks to be the same as git.kernel.org
[18:09] arkiver: Links are still broken on kernel.org homepage, though
[18:09] Give it time...
[18:09] We'll see
[18:09] should I email about cdn.kernel.org?
[18:09] I'm not convinced it'll be fixed
[18:09] but we can wait
[18:09] it's not like kernel.org is going anywhere soon
[18:10] "
[18:10] I'm seeing a similar issue with https://cdn.kernel.org/robots.txt if anything gets redirected there Internet Archive will still have issues grabbing from https://cdn.kernel.org/pub OR https://kernel.org/pub"
[18:11] I know kernel.org isn't going anywhere and I'd be surprised if their team doesn't have backups of backups buried under backups stuck in time capsules of backups somewhere. But openwrt.org recently had major corruption of their site and forums due to hardware failure
[18:11] mostly forums iirc
[18:12] https://forum.openwrt.org/ -- "The OpenWrt forum is currently offline due to a hardware problem on the hosting machine."
[18:18] right
[18:33] *** fie has quit IRC (Read error: Operation timed out)
[18:45] *** fie has joined #archiveteam-bs
[19:15] arkiver: Are you aware of any efforts to replace SHA-1 in WARCs? I guess since the specification allows for any algorithm to be used, it's simply a matter of coordinating with the different authors of WARC-related software?
[19:15] I'm not aware of any efforts like that
[19:19] *** DragonMon has quit IRC (Read error: Connection reset by peer)
[19:21] Hmm, looks like the spec doesn't allow for multiple headers of the same type (except for WARC-Concurrent-To), so having multiple digests for the same record for backwards compatibility won't be possible (unless the spec gets modified).
[19:22] Oh well, I think there are more pressing issues with WARC, like adding a way to store SSL certificates.
[19:22] a standardised way*
[19:23] everything is decided here https://github.com/iipc/warc-specifications
[19:23] but it's slow and taking long and all that
[19:23] Yeah
[19:24] however I think we are allowed to add our own random fields too, and we can store SSL stuff as for example resource records
[19:24] The response is also frequently "implementation first please".
[19:24] i think we can do that?
[19:26] For sure.
[19:26] we can find a good way to store SSL and other stuff (DNS?) and make an issue there. If the responses are not too negative I think we can just start using it. Then there is more of a reason for them to accept it if it's already in billions of records.
[19:26] Of course only if the responses are not totally negative towards the idea
[19:26] You could double up on it arkiver
[19:27] have SHA1 header and then a SHA256 header
[19:27] Nope, the spec doesn't allow for that.
[19:27] "WARC named fields of the same type shall not be repeated in the same WARC record"
[19:27] what about using a new header field
[19:27] I understand not having more then one
[19:28] but as metadata
[19:28] That would be possible, but ugly.
[19:28] best way to ensure compatibility for now
[19:28] JAA: what do you think of that idea?
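
To make the new-header-field variant of that idea concrete: WARC-Payload-Digest itself cannot be repeated, so a second, stronger digest would have to ride in a field the spec does not define. The field name below is made up purely for illustration, and the values are placeholders; WARC digest strings are conventionally the algorithm label followed by a base32 encoding of the raw hash.

WARC-Payload-Digest: sha1:<base32 of the SHA-1 hash>
WARC-X-SHA256-Payload-Digest: sha256:<base32 of the SHA-256 hash>

Readers that only know WARC-Payload-Digest would simply ignore the extra field, which is the compatibility argument being made here.
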
[19:28] not sure if it's a good approach
[19:31] JAA: one thing I have been thinking about a lot and what I really really want in there is torrents support
[19:31] and/or magnets
[19:31] especially with webtorrents that are sometimes used to load stuff likes images and videos
[19:32] JAA: We could start working with archiveteam on drafting ideas for archiving torrents and the stuff that is downloaded with them
[19:32] wait, im looking at the spec
[19:32] WARC-Block-Digest: sha1:UZY6ND6CCHXETFVJD2MSS7ZENMWF7KQ2
[19:32] WARC-Payload-Digest: sha1:CCHXETFVJD2MUZY6ND6SS7ZENMWF7KQ2
[19:32] so it has a sha1 prefix
[19:33] arkiver: I like the idea of adding SSL certificates in resource records or similar. DNS would fit into response records; Heritrix already does that actually.
[19:34] *** DragonMon has joined #archiveteam-bs
[19:34] JAA: hmm didn't know that about heritrix, pretty nice
[19:34] jrwr: That's right. Unfortunately, whatever you do, it won't be backwards compatible.
[19:34] Well, except your solution but that's really ugly in my opinion.
[19:34] JAA / arkiver: I'm in favor of it, if we can find a way to be able to trace the content back up to a trusted certificate
[19:35] IIRC, that'd require storing the session secret in the warc file
[19:35] tyzoid: Even that won't help since the content is encrypted symmetrically.
[19:35] aaand how about the webtorrents :)
[19:35] JAA: The content is symmetrically encrypted with a key that's agreed upon (usually by) Diffie Helmen
[19:36] hellman*
[19:36] Which is asymmetric
[19:36] tyzoid: Yeah, but the client (which writes the WARC) could modify the content at will without affecting the key.
[19:36] tyzoid: JAA ok so there was some issue with configuration, they are updating https://cdn.kernel.org/robots.txt now -- https://i.imgur.com/CswwPlL.png
[19:37] Right. I'd need to look at the protocol more in depth, but I believe there's a way to be able to store enough data to verify the message
[19:37] tyzoid: Yeah, *if* the right cipher is used it might work.
[19:37] JAA: Luckily, the client controls the cipher used
[19:38] To a degree, yeah. But it still needs to be compatible enough to grab everything.
[19:38] JAA: As long as we've got a wide enough range of supported ciphers, the server will pick one they prefer. If that doesn't work, we can just fall back to what we've got now
[19:39] Yeah, true.
[19:39] DragonMon: Excellent, thanks.
[19:39] Yes, glad that's sorted ^
[20:40] *** schbirid has quit IRC (Quit: Leaving)
[21:15] *** plue has quit IRC (Quit: leaving)
[21:19] *** plue has joined #archiveteam-bs
[22:11] *** jschwart has quit IRC (Quit: Konversation terminated!)
[22:12] *** BlueMax has joined #archiveteam-bs
[23:39] *** Despatche has joined #archiveteam-bs
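
Circling back to the idea from around 19:24-19:37 of storing TLS material inside the WARC: a rough sketch of what a certificate chain kept as a resource record might look like. Every detail of this record (the URI scheme, the date, the content type) is invented here for illustration; nothing like it is defined in the WARC spec yet.

WARC/1.0
WARC-Type: resource
WARC-Target-URI: ssl:www.kernel.org:443
WARC-Date: 2018-05-30T19:30:00Z
Content-Type: application/pem-certificate-chain
Content-Length: ...

-----BEGIN CERTIFICATE-----
(server certificate, followed by any intermediates)
-----END CERTIFICATE-----

Heritrix's DNS records, which JAA mentions above, are roughly the precedent: they are ordinary records whose target URI uses a dns: scheme rather than http(s).
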