[01:52] uploading video #200 [01:52] yay [01:58] excellent [02:07] holy crap- this must be the finest job posting ever: http://blogs.valvesoftware.com/abrash/ [02:08] just got nfs access to the imagedump server for wikimedia foundation [02:08] those will be going up soon on a.o [02:08] congrats! [02:09] thanks :) [02:09] Holy carp, is THAT where Abrash wound up? [02:10] after 14 years he came back there it looks like [02:12] 204.9.55.82:/z/public/pub/wikimedia/dumps 157T 31T 126T 20% /mnt/dumps [02:12] 204.9.55.82:/z/public/pub/wikimedia/images 143T 16T 126T 12% /mnt/images [02:12] wheeee [02:18] underscor: excellent! [02:19] i've got hundreds of photos there [02:19] :) [02:19] i was hoping they'd outlive wikimedia, and it seems it will [02:19] next step it to write ingestion logic to get it all into archive.org [02:19] s/it/is/ [02:19] (hundrds on the commons, that is) [02:19] underscor: good luck, and well done on scoring those [02:20] thanks [02:20] Wow, that's...a big array. [02:21] Two of them. [02:21] yeah [02:21] the box has something like 480TB on it [02:22] Where box == rack, I'd imagine. [02:23] it's all on one "machine" [02:23] connected over fibrechannel disk enclosures [02:24] So something like a Ceph cluster? Okay, makes sense. [02:59] Well, my archives con today reinforced how awesome AT is [02:59] So, go you guys o/ [03:01] shaqfu: marac? [03:01] mistym: ...how'd you know? [03:02] shaqfu: Seen a bunch of people I follow twittering it up today. Wasn't there myself. [03:02] mistym: Yep, was there today/tomorrow [03:02] mistym: Didn't know we had another traditional archivist in the room [03:04] shaqfu: Yep! For a given value of "traditional" anyway, but yeah, did my masters in a traditional archives program 'n all. [03:04] mistym: Spiffy; just got mine [03:05] Congrats! [03:05] Thanks :) [03:05] Are you with a place now? [03:05] Yeah, I work at a museum in Manitoba. [03:06] Gotcha; bit distant for MARAC, then [03:07] Yeah, not exactly in the area. [03:07] Is there a regional one for central Canada? [03:08] Not in my province, at least. Alberta's archivists are pretty active though. [03:08] Oh, wow; that's a hike [03:11] The digital object seminar was cool; the job one, harrowing; the DH one, dull [03:12] I saw anarchivist tweeting about the job one. It sounded brutal. [03:12] Yep [03:13] Lots of "the system is totally fucking broken and we can't fix it" [03:14] And more of the usual student/professional divide, but nobody discussed it :( [03:14] :( [03:15] For those of us stuck between them, we're SOL [03:18] Hm, is there a url for the wayback machine to load the latest version of a page, rather than a list or specific revision? [03:24] But yeah, after listening to a bunch of "real" archivist talk digital records for a day, I appreciate AT that much more [03:26] shaqfu: I know, right? It's unfortunate that it's hard to have a constructive discussion in that environment. [03:27] mistym: Yeah, lots of "we need to be doing something!" and nothing getting done [03:27] Yeah... [03:27] Aah, I was just about to ask about that... [03:28] shaqfu: Giving a talk at this year's ACA that I'm hoping to balance with a little "we can do things! let's get things done!" [03:28] mistym: C being Canadian or Certified? [03:28] Canadian. [03:28] ("Canuck") [03:29] Wyatt|Wor: re: wayback machine? [03:29] If a bunch of malcontents online can move mountains, imagine how much big institutions could do... 
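An aside on Wyatt|Wor's unanswered [03:18] question: the Wayback Machine accepts a partial timestamp and redirects to the closest capture it holds, so a bare "2" as the timestamp lands on the newest snapshot without going through the listing page. A minimal sketch from the shell (example.com is a placeholder; the availability API is the other option):

    # Follow the redirect to find the newest snapshot URL:
    curl -sI 'https://web.archive.org/web/2/http://example.com/' | grep -i '^location:'

    # Or ask the availability API for the closest capture as JSON:
    curl -s 'https://archive.org/wayback/available?url=example.com'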
[03:29] mistym: Re: "listening to a bunch of "real" archivist talk digital records for a day" [03:29] Wyatt|Wor: Ahh. [03:29] Whoops, those quotes escaped. [03:30] Need to be more careful escaping. [03:31] shaqfu: I dunno, the longer I'm in big institutions, the more I worry institutional glacial workflows can't be made to work at the speed that's useful. [03:31] that archive team was necessary shows that there's a real problem with "real" archivists [03:31] I'm being overly pessimistic there, but there is major reorientation that institutions, in the big-institution sense, are going to have to do. [03:31] as much as many of them do great work, a lot of them have a lot of great ideas about digital preservation and very little wget [03:32] winr4r: I think there are more problems than simply archivists. [03:32] Wyatt|Wor: oh of course [03:32] i don't doubt that [03:32] mistym: Yeah, it's hard to respond to "you have 30 days before we delete everything" at the speed of bureaucracy [03:32] Mhm... [03:33] yes [03:33] It's also hard to change the mentality of people who put their data up without a second thought. [03:33] Yeah :( [03:33] And harder still to change the businesses that will bean-count something into oblivion without so much as a half-hearted apology. [03:33] And yeah, there needs to be a serious realignment if we're going to realistically deal with these records [03:34] Christ Almighty, Geocities alone is bigger than most university library systems [03:34] Good fucking luck doing it the old-fashioned way; I'll see you at the heat death of the universe [03:34] And yes, the metadata problem is monstrous. [03:34] Wyatt|Wor: It's solvable - you can work miracles with machine language processing [03:35] Which can at least deal with text stuff [03:35] give it a decade or so [03:35] shaqfu: Haha, you got me. I was just thinking about how to hack away at it with NLP. [03:35] Wyatt|Wor: One panel today was about using topic modeling on newspapers; I'm sure, given time, it'll apply to messier collections [03:36] But yeah, get a lot of processing power together, point it at Geocities, and you'll have something at least usable [03:36] maybe the only way to handle the new workflows is to have a totally separate group inside that's connected in name only, so they can respond in the timeframes required [03:37] shaqfu: I see Yahoo as being a pretty good analogue. Not data deletion Yahoo, but the old Yahoo web directory. [03:37] Ah, yeah [03:37] I've heard (unconfirmed) that they were among the biggest employers of library-school graduates at one point in time! [03:38] Yeah, like DMOZ, except people used it [03:38] Exactly. [03:38] There was a point where people thought of the internet as a thing you could index by hand, with meticulous metadata. [03:38] Then the dot-com boom came, the internet *exploded*, and that was never possible again. [03:38] Yep [03:39] mistym: i still think there's a place for it [03:39] winr4r: For hand curation? [03:39] shaqfu: yes [03:39] actually we have it already: it's called twitter [03:39] winr4r: Possible, but things move too fast for that [03:39] you just distribute the task [03:40] winr4r: For subject-specific stuff, etc. What I'm saying is that there will never again be a time where all the Internet that's fit to print is hand-curated to someone's professional standards. 
[03:40] winr4r: That requires an established network [03:40] shaqfu: yes [03:40] I think a better example is any of the social bookmarking sites (or any bookmarking site that has bookmarks open to the public) [03:40] like delicious and pinboard [03:41] dashcloud: Not enough. There's simply too much data to rely on that. [03:41] mistym: you might be right [03:41] on the other hand, is there actually more good stuff on the internet than there was in 1999? :P [03:41] hell yes [03:42] in quantity yes, percentage-wise maybe, maybe not [03:42] HiiiiIiIiiIIiiiI [03:42] I saw "Detention" [03:42] you must see detention. [03:43] Detention? [03:43] But I've done nothing wrong! [03:43] hi jason [03:43] Movie about chewing gum in high school? [03:43] Evening to you. [03:45] SketchCow: Has there been talk at IA about using NLP to handle metadata for these McLargeHuge collections? [03:47] What does NLP = ? [03:48] Quick search shows Neuro Linguistic Programming, but that doesn't seem right. [03:48] That's probably it. [03:48] natural language processing [03:48] If you're trying to data mine metadata for relevant info to humans [03:48] winr4r: Thanks [03:51] Letting machines figure out word associations, more or less [03:51] Interesting. [03:52] I'm reading the Stanford Natural Language Processing Group web page. Who knew? Well I guess you did. :) [03:53] shaqfu: Absolutely not [03:54] SketchCow: Really? Admittedly, I'm surprised [03:55] Seems like the only reasonable solution - barring some miracle, I don't see there being enough humans to mark everything up [03:57] Don't be the latest in a hundred people I've dealt with surprised that archive.org doesn't have much manpower. [03:58] SketchCow: I'm not surprised IA has minimal staff; I knew that already :P I'm surprised there's been no talk about letting machines do the heavy markup lifting [03:59] It's kind of easy to assume that archive.org is some all-powerful automaton. Then you stand in a room WITH THE INTERNET and suddenly you realize that it's not actually powered by elder gods or smth. [03:59] (even if it *is* in a church) [04:01] Again [04:01] THERE'S NOBODY TO TALK [04:01] i'd say let someone in 20 or 30 years deal with it when they're widely recognised for being as important as they are [04:01] SketchCow: Gotcha this time [04:01] you could worry about NLP now to find all those cat photos, or you could 'grep -ri "cat\.*photo" /geocities' a thousand times as fast in 20 years' time [04:02] winr4r: For the amount of processing power it'd take, and how NLP isn't really at the point you'd need yet, yeah, may as well wait [04:03] I don't think there's only one "right" approach. [04:03] There rarely is [04:03] Especially since the data is hierarchical, that actually could help quite a bit. [04:04] Well, up to a point. [04:05] (I'm not familiar enough with the data set to know what proportion of neighbourhoods are just numbered with random stuff shoved in) [04:05] Didn't they stop that system after a point? [04:05] shaqfu: yes, after around 1999 i believe [04:06] So it's hard to regard that hierarchy for any serious use, unless you're limiting your work to 199x-1999 [04:06] One harpoon. [04:06] Hm? [04:07] Hooray, scraping script running. Hopefully will be successful! [04:07] mistym: what are you scraping? [04:07] winr4r: digiplay.info [04:07] mistym: excellent [04:08] Even though the data is kind of messy, it's not too much work to extract it into structured json.
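The log doesn't say what mistym's scraper is written in, so this is a hedged sketch only: the extract-messy-HTML-into-structured-json step, for a page with unambiguously named divs, can be done even from the shell with xmllint and jq. The URL, XPath expressions, and field names below are invented for illustration:

    # Fetch one record page (placeholder URL):
    wget -q -O page.html 'http://digiplay.info/some-record'
    # Pull fields out of named divs (hypothetical class names):
    title=$(xmllint --html --xpath 'string(//div[@class="title"])' page.html 2>/dev/null)
    author=$(xmllint --html --xpath 'string(//div[@class="author"])' page.html 2>/dev/null)
    # Emit one structured JSON record per page:
    jq -n --arg title "$title" --arg author "$author" '{title: $title, author: $author}' >> records.json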
[04:08] This is also why I keep bringing in assholes from outside to dump open-source solutions and leverage archive.org against it [04:09] There's no dev space inside the company [04:09] leveraged synergies [04:09] wtf [04:09] SketchCow is talking about leveraged synergies [04:09] Hunh; I knew it ran lean, but didn't expect it to run *that* lean [04:10] SketchCow Clicker? [04:10] The press says things like 200-300 employees [04:10] But the vast vast vast majority of those people are scanners. Book scanners. [04:10] Isn't it something like 20-30 core? [04:10] I'd say, maybe, MAYBE, my observation is 20-30. [04:10] Yes, 20, 30. [04:11] Now, work that out. [04:11] We have 5 people overseeing the book scanning centers. [04:11] Boom, now we're 20-25% down [04:11] etc [04:12] I'm like hiring six new employees in terms of stuff and publicity and the rest [04:12] But I can only do things that are being brought in, there's no way to make those poor devs do MORE work [04:12] 15 coder-librarians is not enough [04:12] And there we are. [04:12] And those people are also jointly responsible for running all the servers and such? [04:12] So if we do some sort of NLP smart tagging smartiness, great. Get on it. [04:12] Free tour. [04:12] Yes, there's a team of 5-10 dev/admin/network people [04:13] OH LOOK AT ALL YOUR EYES GO WIDE [04:13] Anyway, so yeah, get on it. [04:13] and there's got to be like six billion servers there [04:13] I'll just use my universal access to ensure you get stuff to help you. [04:14] you should get a rifle, some tranq darts, then go hang out outside one of google's datacenters [04:14] Pity 'bout the 5-10 years math ed it'd take to do it; it'd be a badass project [04:14] Go rape a graduate program is my suggestion [04:15] Anyway, unrelated, I need to go to bed now. [04:15] Yeah, the perfect admin abduction is hard to pull off. [04:15] Good night. [04:15] night jason [04:15] G'nite [04:15] Let's keep making amazing shit [04:17] Ooo, one of batcave's two remaining mounted drive sets has been emptied out [04:17] We're now down to one. 9gb. [04:17] in any case, i think there's a risk of over-complicating the "saving shit" strategy (which provably works very well) and turning it into a discussion about "how do we make sure that every single thing is categorised as well as books are in a library" and thereby getting very little done [04:17] SketchCow: s/g/t/ ? [04:17] i can't imagine you having only 9gb of *anything* [04:17] ha ha [04:18] winr4r: Yep, that's what happened on this end; it turned into an issue of description [04:18] Did I write 9gb? [04:18] I DO need a rest [04:18] 9tb [04:18] Which, really, people only care 'bout good-enough [04:18] SketchCow: did you actually sleep at all last night? [04:18] Barring special cases - obviously shit like presidential letters need Awesome [04:19] meh, president is just another sack of meat [04:19] shaqfu: good enough and actually existing beats immaculately described archives that do not [04:19] winr4r: You got it [04:20] "less process more product" etc [04:20] mistym: wrought grand - okay item-level description [04:21] ~500 pages of 5000. This may take a while. [04:24] good luck [04:30] So as a baseline, any thoughts on what metadata should be given priority? Dublin Core and a mostly-flat ontology of descriptive tags? [04:34] http://archive.org/details/stage6 [04:35] He even archives in his sleep. ;) [04:36] zzzzzzreclassifyfuckdublincorzzzzzzmmzzzzz [04:39] haha [04:42] That's fine.
I'm not a particularly huge fan of DCMI, even though I live a stone's throw from Dublin. [04:42] mmm [04:42] i has a collection [04:43] What has you a collection of? [04:43] i'm uploading the stage6 items [04:43] Coderjoe: how did you get them? [04:44] Ahh. [04:44] winr4r: I downloaded them between the closing announcement and the shutdown [04:44] with metadata and everything [04:44] http://wegetsignal.org/stage6/ [04:44] will probably be a little slow [04:45] 25 terabytes? [04:45] <3 [04:45] winr4r: I don't have 25 TB of videos [04:45] only 290-ish GB [04:45] oh, nm, saw the percentage [04:45] but still, good work :) [04:46] Good one. [04:46] Is that 25TB before or after deriving? [04:46] that was the projected size of what was up on the stage6 servers [04:47] Ah... :/ [04:48] on an unrelated note, is there a big list of fortunecity sites that you guys have been using? [04:48] and this was just me with three network connections (home, work, and a server in california) [04:48] if i'm well within my bandwidth cap towards the end of the month, i will set my screenshot bot loose again [04:48] winr4r: I think it came from google results [04:49] http://archive.org/details/geocities-screengrabs-collection in case you didn't know [04:49] 4000+ from geocities [04:50] winr4r: Does it run as a normal user? I'll give you an account on my VPS, if you'd like. [04:51] Wyatt|Wor: yes, though it does need an xvfb to run on [04:51] and a bunch of dependencies that aren't normally on a server [04:51] That doesn't necessarily mean it's not doable. [04:52] (Though I've never messed with xvfb on a headless machine) [04:52] Wyatt|Wor: me neither [04:55] xvncserver wouldn't work? [04:55] (you should be able to take a shot of the desktop or the like, I would think) [04:55] Coderjoe: i'd expect it would [04:55] i just know for sure that Xvfb does [04:56] i mean, yes, it isn't a vfb, but still [04:56] hmm. 6 items i need to redo [04:57] i'm sure it will increase [04:57] Okay, looks like xvfb will work on a headless box. That's what Google says. [04:58] btw, that "videos listed" stat is just the videos I had pulled into my database with my importer. the "total video count" at the bottom is my estimated total video count that stage6 hosted [04:58] Wyatt|Wor: splendid! [06:57] underscor, what dump server? [06:59] ah, your.org [09:15] Curious, since this thing is _still_ grepping that file, the webdav-feed.json and .xml...what role do they serve, exactly? [09:15] how big is the file? [09:16] and how can it take that long to grep anything? [09:16] fgrep is much faster, for fixed strings [09:17] yeah but i grepped a 1.9gb file in seconds, earlier today [09:17] it was probably all sitting in ram [09:17] (getting a list of fortunecity sites from the ODP) [09:17] chronomex: nope, fresh from the disk [09:17] hm, ok [09:19] It's the json. It's 35MB. The incantation is grep http://gallery.me.com/[^"<]+ data/p/pe/per/pertormod1/gallery.me.com/webdav-feed.json and it's accumulated 3766 CPU _Minutes_ [09:19] okay, i would call that a bug [09:20] indeed [09:20] Run it interactively and see what it outputs, if anything? [09:20] use [^"<]+? and grep -E [09:20] Even assuming the worst case of grep's iconv locale performance, I'm inclined to agree. [09:20] Sorry, there's an -oE in there I missed [09:21] (It's the seesaw-s3.sh) [09:21] just timed a regex again on a copy of said 19gb file, 19.4 seconds [09:21] winr4r: 1.9 or 19? [09:21] 1.9gb* [09:21] +? [09:21] ah [09:21] +?
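For the headless-VPS question above ([04:50]-[04:57]): Xvfb does work on a box with no display hardware, and Debian/Ubuntu ship an xvfb-run wrapper that handles the setup and teardown. A minimal sketch, with ./screenshot-bot standing in as a placeholder for winr4r's actual script:

    # One-shot: xvfb-run starts a virtual framebuffer, runs the command against it, tears it down
    xvfb-run --server-args='-screen 0 1024x768x24' ./screenshot-bot

    # Or by hand, keeping a persistent virtual display on :99
    Xvfb :99 -screen 0 1024x768x24 &
    DISPLAY=:99 ./screenshot-bot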
[09:21] so yeah 3766 minutes for a 35mb file is a LITTLE excessive [09:22] Wyatt|Wor: hmmmm. I would try some cut | grep (not -E) action. [09:22] That comes out to about 162 bytes per second (and dropping, if it's still going) [09:22] drooping [09:23] That's my perspective too. I think the emulated ARM processor that booted Linux on the 8-bit MC had a better data rate about a thousand times that. [09:25] winr4r: I see you're asking about a list of FortuneCity sites. I can send you the list from which we've been downloading, if that helps. [09:25] Hey, it's faster than some 600 baud modems, according to Wikipedia. [09:25] some? [09:25] alard: i'd appreciate it, we can compare notes too [09:25] https://en.wikipedia.org/wiki/List_of_device_bandwidths#Modems_.E2.80.93_narrow_and_broadband [09:26] alard: i grabbed a list from ODP (hence grepping a 1.9gb file), did you guys try that? [09:26] There are two 600 baud ones that're 1.2 kbit/s and one that's 2.4 kbit/s. [09:26] ah [09:26] winr4r: What's ODP? [09:26] alard: open directory project [09:26] Ah, I see. No, I just googled. [09:26] okay, one sec [09:27] http://dl.dropbox.com/u/57276499/sitelist.txt [09:28] is what i got from ODP [09:28] I actually don't understand this regex, even. -o is --only-matching, -E is extended regex... how does this work? [^"<]+ [09:29] I don't think you should need -E for that [09:29] winr4r: Okay, got it. I'm currently making my list. [09:29] Wyatt|Wor: The regex matches anything until " or < [09:29] I believe that's matching urls in the webdav file. [09:30] So it will match from http:// until the tag ends. [09:30] alard: Ah, I thought the caret was an anchor to the beginning? [09:31] Wyatt|Wor: not within []s [09:31] Between [] it's a negation. So 'anything but " and < ' [09:31] alard: thanks :) [09:31] ooooooh, I see. Hmm, need to put more skill points in RegEx. And the +? [09:31] one or more instances of the preceding object [09:32] * means zero or more. [09:32] er, matching element, which in this case is the whole [] expression [09:32] Ah, so [] create a single semantic unit. I see. [09:32] indeed [09:33] it matches exactly one character [09:33] winr4r: http://db.tt/PjVwK1A2 (a 3.7MB .txt.bzip2) [09:34] is that the file we're working on? [09:34] No, that's the list of all fortunecity sites. [09:34] alard: thanks! [09:35] winr4r: You'll have to expand the streets yourself, we've basically archived anything from number 0 to 2600. [09:37] alard: so com/campus/athena = campus.fortunecity.com/athena/ ? [09:41] Hm, so it's definitely finding things, though it seems awfully slow... [09:42] paste the entire command line [09:43] grep -oE 'http://gallery.me.com/[^"<]+' data/p/pe/per/pertormod1/gallery.me.com/webdav-feed.json # Pretty much verbatim from seesaw-s3.sh [09:44] Ah, I think I've got the problem. Who do I bug about a patch? [09:45] a bug in grep? [09:45] Well, yes, to an extent. But it's a bug I think we can safely work around. export LANG=C And it's about three orders of magnitude faster [09:46] I could have sworn the iconv bug was fixed though. :/ [09:46] where is the file to parse? i want to make some grep tests [09:46] emijrp: Let me stick it somewhere. [09:48] Come to think of it, DCC would have been faster... [09:48] radiusic.com/bigfeet.json [09:48] "d" and "t" are totally right next to each other. [09:50] downlaiding [09:52] Okay yeah, it hit me because my grep is old.
It's apparently fixed in grep 2.9 [09:53] (Didn't realise I was still using grep 2.5.4) [09:53] what do you want, the entire url o just the domain + username?= [09:53] grep 2.old [09:53] The problem is, in this case, most distros in production are probably using old grep. [09:54] CentOS 6 has grep 2.6 [09:54] i'm on 2.5.4 too [09:55] but piping it to a file, it takes a few seconds [09:56] 8 seconds, to be precise [09:56] time grep -oE 'http://gallery.me.com/[^"<]+' bigfeet.json > what [09:56] is what i am using [09:57] emijrp: The problem isn't that it doesn't work. The problem is that when you're using many versions of grep in the wild with LANG=en_US.utf8 (or any unicode, locale for that matter), it's fantastically slow. [09:59] unicode comparisons are always very slow [09:59] The good thing is, we can patch our scripts by explicitly setting LANG=C and LC_CTYPE=C and that should be safe. [10:00] i am en_GB.utf-8 and that grep still takes seconds rather than hours [10:00] (Or just unset LC_CTYPE) [10:01] In what format is the file you're grepping save in? Wouldn't this matter? [10:01] I suppose grep should determine what charset it's being used, but will do so only from the headers... [10:02] Nemo_bis: It's just a JSON file from mobileme [10:03] winr4r: "so com/campus/athena = campus.fortunecity.com/athena/ ?" Yes, or www.fortunecity.com/campus/athena/. (The subdomain approach doesn't work with co.uk/it/es/se, I think.) [10:03] alard: gotcha [10:48] netsplit [11:08] Okay, updated grep and things are much speedier. I'll try to take a look at the memac scripts when I get home and figure out where to add that env. [11:09] it's weird though [11:10] that i can be running the same version also with a UTF-8 LANG and do in nineteen seconds what your grep didn't finish in hours [11:10] * winr4r isn't exactly on a speed-demon computer [11:14] winr4r: What distro/version? [11:15] Wyatt|Wor: ubuntu 10.04 [11:15] Distro-specific patches will do that. Yeah, Debian patched it a while back. [11:15] ah :) [11:15] Gentoo just stabled a newer version instead. [11:15] that explains that [11:16] (But this is my work computer, so I don't exactly bother updating often) [11:17] * winr4r nods [11:17] 1507 screenshots ;D [11:18] Ooh, going pretty fast. [11:27] 06:25:13 up 13:46, 6 users, load average: 54.70, 54.97, 58.34 [11:27] hmm i think i started too many threads [11:27] What, that's it? [11:27] hahaha [11:27] yeah that's it [11:28] ;) [11:30] i got a box from softlayer and its not going over 100mbit :( [11:31] Time to go home. Later. [11:32] bye Wyatt|Wor! [13:33] See? I sleep like everyone else. Here I am, back up again. [13:34] lies [13:40] haha [13:41] hey cow, i'll have fortunecity screenshots for you soon [13:41] i'll email you when i'm done, it's not urgent [13:44] Sounds fun [13:49] Wyatt|Wor is letting me use his VPS for it [13:49] i'm at nearly 2000 now [13:49] how are you? :) [13:49] Just blew another bulk of mobileme off batcave. [13:49] The machine is now down to 8.8tb of data. [13:50] Which is good, it's down from rough 28tb [13:52] Mostly, I'm stunned, I'm finding additional pieces of friendster [13:52] And everything else. [13:52] Also, our Berlios grab [13:53] that's the archive team equivalent of finding loose change in your sofa? [13:53] Yeah [13:54] What about splinder? [13:54] SketchCow, I think chronomex needed a place where to upload his last pieces of Splinder. [13:55] Next, I need to start shoving splinder into archive.org proper. [14:01] Hey there, I'm James. I'm from Australia. 
[14:01] I have the site bookmarked again, and will read a few files when I get the time. [14:01] I'm nearly seventeen, and I remember coming across textfiles at 12 or 13.. [14:01] You are fucking straight up, and I respect it.. [14:01] Thanks for doing what you do! [14:02] :) [14:02] doesn't that sort of thing just make your day? [14:04] Well, I get a lot of them. [14:04] But I do appreciate them. [14:04] mhm [14:29] 2.2T mobileme-03 [14:29] 2.8G mobileme-05 [14:29] 413G mobileme-06 [14:29] 7.5G mobileme-04 [14:29] See just kind of lying around there [14:30] 2.2 terabytes sounds like a small figure then you see "413G" and then it's like "oh, that is actually a big number" [14:35] haha [14:35] are there any other projects apart from mobileme i can be helping with? i have bandwidth to spare [14:39] Check the wiki? [14:40] I don't actually know offhand which need bandwidth OTHER than mobileme [14:41] I'm about to dump a pile of Polish shareware CDs onto the cdbbscollection. [14:42] yeah i looked, theres not really anything else to do from what i can see :( [14:43] Can't you simply use more bandwidth on mobileme, or is mobileme at its limit or something? [14:44] Mobileme is a cancer eating all our attention - after I finish with batcave's decomission I will start regarding other things we can do. [14:45] i cant seem to get more than about 100mbit out of mobileme from my box at softlayer even though its on a gige connection [14:45] and im running a lot of threads, running more just bogs the system down and doesnt get anything downloading faster [14:50] Someone has sent me 4gb (or thereabouts) of mid 1990s Spanish demoscene stuff. [14:52] SketchCow: that message from james is really cool [14:53] Yeah, and he's still a young nubile 17 year old and not some busted old mare like you [14:53] * SketchCow turns undersco2 [14:53] Come back when you've earned three fiddy [14:53] i cant resolve textfiels.com :/ [14:53] textfiles.com rather [14:53] Record expires on 07-Oct-2021. [14:53] SketchCow: <3 [14:53] hahaha [14:53] It ain't that! [14:53] 2021! [14:53] same here [14:53] Bitches! [14:54] ;; ANSWER SECTION: [14:54] textfiles.com. 3600 IN A 208.86.224.90 [14:54] fine here [14:54] Well, I'm ON textfiles.com, so it's not the machine. [14:54] cant get it from my box in australia or here in budapest [14:54] Likely, someone is assfucking the apache. [14:54] One moment. [14:54] [14:54] The connection was reset [14:54] [14:54] [14:54] The connection to the server was reset while the page was loading. [14:54] yep same as undersco2 [14:54] yep [14:54] oh fuck [14:55] that's a lot of returns [14:55] sorry [14:55] Ah, here we are. [14:55] Someone has 480 simultaneous connections to the machine. [14:55] That might be a factor. [14:55] jesus [14:55] anyone know a way/system for a redundant multi node filesystem i can run between many computers? [14:55] w./ linux [14:55] ha ha. [14:56] Someone's about to meet my old friend mister soft firewall [14:56] hahaha [14:56] oli: ceph [14:56] can't you put in a rewrite rule for his IP so he downloads 480 goatses every time? 
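Circling back to alard's sitelist format ([09:35], [10:03]): each entry looks like tld/neighbourhood/street, and the user areas under each street are numbered, roughly 0 through 2600. A hedged sketch of the expansion in shell; the number range and the handling of the co.uk/it/es/se entries are assumptions to be adjusted, not confirmed details:

    # Expand entries like "com/campus/athena" into candidate URLs
    while IFS=/ read -r tld hood street; do
      for n in $(seq 0 2600); do
        echo "http://www.fortunecity.${tld}/${hood}/${street}/${n}/"
      done
    done < sitelist.txt > urls.txt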
[14:56] ^ [14:56] hahahahaha [14:57] undersco2: thx will look into it [14:59] tcp4 0 33078 208.86.224.90.80 189.19.142.212.42364 LAST_ACK [14:59] tcp4 0 33078 208.86.224.90.80 189.19.142.212.42365 LAST_ACK [14:59] tcp4 0 33078 208.86.224.90.80 189.19.142.212.42591 LAST_ACK [14:59] tcp4 0 33080 208.86.224.90.80 189.19.142.212.42506 LAST_ACK [14:59] tcp4 0 33080 208.86.224.90.80 189.19.142.212.42566 LAST_ACK [14:59] tcp4 0 33080 208.86.224.90.80 189.19.142.212.42328 LAST_ACK [14:59] tcp4 0 33079 208.86.224.90.80 189.19.142.212.42257 LAST_ACK [14:59] tcp4 0 33078 208.86.224.90.80 189.19.142.212.42238 LAST_ACK [14:59] tcp4 0 33078 208.86.224.90.80 189.19.142.212.42239 LAST_ACK [14:59] tcp4 0 33078 208.86.224.90.80 189.19.142.212.42129 LAST_ACK [14:59] tcp4 0 33080 208.86.224.90.80 189.19.142.212.42126 LAST_ACK [14:59] tcp4 0 33077 208.86.224.90.80 189.19.142.212.42127 LAST_ACK [14:59] tcp4 0 33078 208.86.224.90.80 189.19.142.212.42128 LAST_ACK [14:59] tcp4 0 33079 208.86.224.90.80 189.19.142.212.42080 LAST_ACK [14:59] It's like that all the way down. [14:59] Just blocked him AND turned off the website for a moment [14:59] I blocked his subnet, because it feels good man [15:00] haha [15:01] It's getting there. [15:01] Another 3-4 minutes, it'll be down to normal, then I'll restart. [15:01] I love these toolbags. [15:01] thanks [15:02] WOAH SHIT THIS WEBSITE IS MIRRORED IN 15 LOCATIONS AND HAS BEEN ON THE NET FOR 14 YEARS I BETTER OPEN THREE BILLION CONNECTIONS AND SUCK IT DOWN NOW [15:02] AAAAAHHHH COULD GO ANY MORE [15:02] ANY SECOND NOW IT MIGHT DIE [15:02] AIIREEEEEEE I ATE SUGER FROSTED SUGAR THIS MORNING WHILE DRINKING QUIK [15:03] ^ this is the exact same conversation being had in fortunecity's secret IRC channel [15:03] AAAARRRGGGGIGIGIGIGIGIGG [15:04] haha. [15:05] Well, they're still at 200 connections, but bringing textfiles.com back. [15:06] whoops, was that me? [15:07] Hooray, my methamphetamine textfile is up [15:07] http://www.textfiles.com/drugs/himet1.txt [15:07] INTERNET SAVED [15:07] lol [15:07] I can't help but wonder how many textfiles we missed. [15:10] In terms of what. [15:10] When you say we missed, do you mean me? [15:10] Erk, looks like scraping errors in my digiplay.info data. At least I have the html cached now. [15:10] Because as far as I can tell, believe it or not, I got most of them, ultimately. [15:10] No, I miss you whenever I sleep. [15:11] Most of them? [15:11] Well, nearly all that were passed from BBS to BBS. [15:11] I wonder if the BBSes that are up today still have any we don't. [15:11] Or you don't. [15:12] However you want to say it. [15:12] So if someone is willfully trying to immortalise their community/content on IA, how best to go about that? [15:12] hiya Wyatt :) [15:13] Wyatt: going strong btw, over 2000 now! [15:14] Feel free to hammer on it all week if you want. Start a couple in parallel, even. [15:14] :D [15:15] SCREENSHOT ALL THE THINGS [15:15] Ha ha, this isn't spanish demo scene. [15:15] This is Spanish ATARI demo scene [15:15] Oh hawt [15:16] mmm, tilt that joystick [15:22] http://archive.org/details/spanish-demoscene-collection [15:23] Finally some Spanish content. [15:24] Por último, todo el mundo puede ser un idiota! [15:25] Everyone can be an idiot! [15:25] ú [15:25] lólólólól [15:25] TACO. [15:31] A weird thing of IA items is that don't show who is the uploader, so, you can't search for similar stuff using the uploader contributions list. [15:33] Agreed [15:35] root@teamarchive-0:/2/FRIENDSTER# du -sh . [15:35] 1.7T . 
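For reference, the 480-connection problem above has a stock Linux answer; textfiles.com itself looks like a BSD box from that netstat output (ipfw or pf would be the analogue there), so this is a hedged sketch rather than what SketchCow actually ran:

    # Drop the offender's whole subnet, as in the [14:59] block:
    iptables -A INPUT -s 189.19.142.0/24 -j DROP

    # Or, instead of a hard block, cap simultaneous HTTP connections per source IP:
    iptables -A INPUT -p tcp --syn --dport 80 -m connlimit --connlimit-above 20 -j REJECT --reject-with tcp-reset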
[15:36] :D [15:40] 15 years later [15:40] 17:35:42 1.7P . [15:40] 17:35:42 root@teamarchive-0:/2/FACEBOOK# du -sh . [15:40] yes [15:40] Only 1.7? [15:40] Onl- DAMNIT [15:40] Haha [15:41] https://www.google.co.uk/search?hl=en&site=webhp&q=define+gigabyte [15:41] also [15:41] Facebook has got to be past 100PB by now, right? [15:41] All of this is nothing compared to when YouTube goes down [15:41] will someone click on the audio icon there and tell me that google is just trolling us [15:41] YouTube Collection [15:42] JYGABYTE [15:42] nitro2k01: let's not even think about that :< [15:44] I wonder if there's a real risk that Flickr goes down. I think not, since it seems to be one of the Yahoo services that are actually profitable. [15:44] Or, I would imagine so [15:44] Didn't they lay off most of the people working on it? [15:45] I don't know, just speaking of the data retention here [15:45] Seems unlikely they would just kill it like Geocities [15:45] Since quite a few people actually have those pro badges that means they pay up every year [15:48] I'd say Delicious is definitely likely to be amputated first among their remaining high-profile sites. [15:49] Wyatt: it already was [15:49] delicious is not owned by yahoo anymore [15:49] ...what, someone actually bought it?! [15:49] And myspace... Just think of all the flashy designs that were trashed overnight [15:50] http://longbets.org [15:50] Wyatt: yup, i believe it was the founder of youtube [15:50] I thought that was a joke. I...am rather surprised. [15:50] Somehow I missed the reality of it. [15:50] You did. [15:50] Hard. [15:51] And it was two of the founders of youtube [15:51] In April. [15:51] Of 2011. [15:51] yeah, just looked that up and found that [15:51] delicious was actually one of the best services that i have seen [15:52] all together now: "fuck yahoo!" [15:53] fuck the internet [15:53] Fuck everything [15:53] * nitro2k01 omniphile [15:53] http://i.imgur.com/PHzN9.jpg [15:54] FUCK IT WITHOUT DEPS! [15:54] Heh [15:54] haha [15:54] Right, now I have to google for what Yahoo even owns anymore. [16:01] shhh, we're fucking [16:02] lol [16:03] SketchCow: textfiles still down? [16:03] curl textfiles.com [16:03] curl: (7) couldn't connect to host [16:03] If it helps I just described AT as bearers of a "titanic black strap-on archival dildocannon"... [16:04] nitro2k01: so, what says they won't delete all of the free users content? :P [16:04] ...bahaha [16:04] nitro2k01: I mean, don't give Yahoooooo too much credit [16:05] They're stupid if they shut down a service that brings in the green stuff (this includes even free accounts) [16:05] Hey something is actually making money in our empire. LET'S SHUT IT DOWN! MOAHAHAHAHA! [16:05] They've demonstrated exactly how they work over and over again [16:05] Just because they get income on a project, doesn't mean they'll let it be [16:05] We do have #flickrfckr you know, just not grabbing everything continously ;p [16:06] Worst case scenario, it'll branch off, or Yahoo will go bankrupt and be split [16:06] You're now on the lulz list [16:06] Come back in 5 years when you've discovered I was right :p [16:07] textfiles.com is down. [16:07] I put the firewall block in the wrong place [16:07] LOVE the nerds all backseat humping me on how I should run my website [16:07] LOVE LOVE LOOOOOOOOOOOOOOVE IT [16:07] Love that. [16:07] put it in the cloud maaan [16:07] I'm not trying to tell you how to do it [16:07] I just wanted to read the meth textfile [16:07] Put it on the moooon! 
[16:08] Can I say that a lot? Can I say it in a way that just echoes in the back of your mind for days and days? Aspy nerds guffawing and saying what I should do, a and b and c and d, and why it's better and so on? [16:08] Love it [16:08] I want to fuck it and make 10 of it and fuck those and make 100 of it [16:08] Sometimes aspie nerds are right [16:08] SOMETIMES [16:08] sometimes they're just loud fucking assholes though [16:08] A broken clock is right twice a day and also doesn't flip out when you move its juice box [16:09] And sometimes even both [16:09] at the same time [16:09] I think SketchCow just likes to fuck [16:09] regardless of what sentiment it is [16:09] SketchCow: :D [16:10] SketchCow likes to fuck because he's a dick <3 [16:11] point is I don't give a fuck if you're right in five years or not [16:12] because it doesn't matter [16:12] Right. What matters is: say something positive about Yahoo -> lulz list [16:12] Or even just neutral common sense [16:13] Yahoo must be bashed [16:13] It's the rite of passage [16:14] right, totally [16:14] We'll leave it at that [16:15] inb4 someone highlights me in two hours and goes like "Well you see the real point is..." [16:17] SketchCow: holy shit that was brilliant [16:19] Me: Blue hair, silver tube top, fishnets, Knee high black biker boots. [16:19] You: Red mohawk, black pentagram gauges, viper piercings. [16:19] I was grinding on you in the pit, then we went to the bathroom, and got f***ed up. You had a nice c**k and I was wasted so I let [you] raw dog it in the stall. You were really good and you had to gag me so I would make too much noise. [16:19] Anyway I'm pregnant. It's yours. contact me if you want to be part of your child's life. [16:20] What's brilliant. [16:20] I came and farted. [16:24] SketchCow: hot [16:29] Oh here we go [16:29] Gigabytes of Polish cd-roms [16:30] First one in! [16:30] http://archive.org/details/chip-cds will get them as they go [16:31] Good luck with that :P [16:31] http://archive.org/details/chip-cds-1997-0 added [16:31] yay [17:04] http://archive.org/details/chip-cds [17:04] awwww yeah [17:05] language attribute is wrong [17:06] Yes [17:06] That's the ingestor. [17:06] After it's done, I'll fix them like THAT [17:15] * SmileyG is so confused [17:15] so you run a website SketchCow? [17:16] or were you quoting someone? :D [17:19] I run a website [17:23] Almost done with the CDs! [17:23] aye [17:23] Wyatt is updating me elsewhere ;) [17:23] More and more things I can put to bed on batcave. [17:23] Nice video of you at Defcon. [17:26] Whoops, forgot 1998, fixing. [17:28] Have to say, you're very good at presenting [17:35] For a moment I thought SketchCow had already ripped all my discs. [17:35] This is sad, but I don't understand most spoken English talks. [17:35] That includes SketchCow presentations. [17:36] Language is a fucking barrier. [17:40] https://www.universalsubtitles.org/es/videos/NE0VZdfk5yzP/info/archive-team-a-distributed-preservation-of-service-attack/ [17:41] #subtitlesteam spread the word about backups writing subtitles in any language [17:48] who can help? [17:49] WTF IS UP WITH THAT [17:50] Bulgey McFishhat? [17:50] <3 [17:51] emijrp: have you tried googles auto translate+cc stuff? [17:51] SmileyG: sucks [17:51] Ah :S [17:55] Hnnnm [17:55] you guys got room to grab megaupload? :/ [17:56] Can't be "got" in its current state. [17:56] (Last I heard, at least) [17:56] EFF is fighting that fight.
[18:00] yeah [18:01] Just, if someone turned up and went "Ok, you don't wanna pay for it, we can store it. Hand it over"..... once the legal issues are done, I think the hosting company would jump at the chance by the sound of things... [18:01] (funny how they were happy to take the money up until then ;)) [18:02] heh [18:02] one day, the archive will end up larger than the entire world's current info :/ [18:06] I think that'll be a proud moment [18:14] hmm [18:14] I think I have a new found respect, and a bit of a man crush on SketchCow :O [18:16] SmileyG: HANDS OFF HE'S MINE [18:16] hehe [18:16] i wish I had the..... well, money he must have :D [18:16] I have like £4 spare a month :( [18:18] gayteam [18:18] not much I can do with £4 heh :D [18:21] £4/mo? That's not much [18:21] but it's something [18:22] Ergh. digiplay.info uses completely different html encoding for its different content types, looks like I'll have to special case a bunch of stuff manually. Oh well. [18:23] mistym: wonderful! [18:23] i love that so much <3 [18:23] random content encoding <3 [18:23] p.s. if i see another UnicodeDecodeError i will actually burn an orphanage [18:23] e.g. journal articles (at least that I've seen so far) use a bunch of unambiguously named divs. Whereas proceedings articles use tables with unnamed elements. [18:24] winr4r: Oh yeah, that's the other thing I love - random unicode fails. [18:24] mistym: oh sorry i thought you meant content encoding [18:24] winr4r: I was ambiguous, my fault [18:24] My absolute FAVOURITE case is Excel for Mac. [18:25] It can export CSV that IT ITSELF cannot read because it uses some crazy text encoding. [18:25] haha! [18:25] that's beautiful [18:26] I thought I was doing something wrong when I couldn't figure out what encoding to give it in Ruby's CSV.parse. But no, Excel itself couldn't open the data it produced. Brilliant. [18:27] not relevant: http://youtube.com/watch?v=7odAbL3Ygts [18:27] mistym: it's an achievement of sorts [18:27] one way encoding \o/ [18:40] Well, that was encouraging [18:41] "We haven't seen names named, but the literature mentions companies working to provide DRM-free software for long-term preservation" [18:41] i'll believe that when i see it [18:41] i.e. never [18:41] winr4r: "encouraging" not "fucking awesome" [18:42] shaqfu: hi, btw :) [18:42] winr4r: ohai o/ [18:43] http://www.flickr.com/photos/djsmiley2k/4548258767/ [18:43] :O [18:43] my cat [18:43] is like [18:43] his cat [18:43] :O [18:43] http://www.flickr.com/photos/djsmiley2k/4488022986/ [18:44] You store your soap on the roof?! [18:44] SmileyG: Aww, your cat's a cutey [18:44] i got 4 [18:44] :O [18:44] he talks [18:44] :D [18:44] Same litter? [18:44] or at least tries to. He thanks you if you open the door for him. [18:44] lol Is Jason from Norwich, UK? [18:45] getting creepy [18:45] SmileyG: you're about six million miles out [18:45] SmileyG: are you from norwich? [18:46] No, but the cat was :D [18:46] SmileyG: oh! [18:46] It was originally my.... wifes brothers girlfriends grans [18:46] she couldn't look after it, she couldn't look after it, we lived with him and my wifes parents, and so he came with us [18:46] oh my god, sockington has figured out self-replication [18:46] :D [18:46] You talked some hours ago about the closing of YouTube. That closing has been happening for ages. Using a sample of 6500 videos about SpanishRevolution, 3.59% of them were deleted (or accounts closed) after 6 months. You can extrapolate to YouTube's age and the millions of videos uploaded.
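On the Excel-for-Mac CSV above ([18:25]): the "crazy text encoding" is usually MacRoman with bare CR line endings, though that's a guess here rather than anything mistym confirmed. If so, iconv plus tr repairs the export (export.csv is a placeholder name):

    # Hedged fix: transcode MacRoman -> UTF-8 and normalise CR line endings
    iconv -f MACINTOSH -t UTF-8 export.csv | tr '\r' '\n' > export-utf8.csv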
[18:46] * winr4r just saw the photo [18:47] winr4r: Its freaky aint it? [18:47] their face is slightly different [18:47] SmileyG: it really is! [18:47] and apollo has smaller eyes [18:47] But the markings, wow. [18:47] Sorry, larger eyes, smaller iris [18:47] SmileyG: i wondered about norwich, i'm from king's lynn [18:47] winr4r: :D [18:47] norfolk best county in world [18:47] I have a old school friend who moved to king's lynn [18:48] SmileyG: on purpose?! [18:48] his family moved when I was..... 13? [18:48] moved to hunstantington? [18:48] ah [18:48] (I've spelt that wrong). [18:48] hunstanton [18:48] Yah. [18:48] mistym: In other news, Archivematica is really damn cool [18:48] Wild cats there... heh [18:48] hunstanton is nice, king's lynn is a massive shithole [18:49] shaqfu: Isn't it? Those guys are awesome. [18:49] shaqfu: They have an IRC channel over on Freenode, though it's not usually too busy. [18:49] Wait, no. Not Freenode, it was some other server. [18:49] mistym: I hadn't heard of it before this weekend, but it came up at nearly every talk this weekend [18:49] SmileyG: fortunately i'm about 6 miles south of it [18:49] who will log all of irc :S [18:50] SmileyG: Who will bug every public space ;o [18:50] Anyway, AFK, lunch [18:50] winr4r: ah [18:50] I'm [18:50] I'm in coventry... [18:50] don't suppose you also heard the sonic booms? [18:50] SmileyG: things will get better [18:50] and nope! [18:51] I really quite like cov :d [18:51] D: [18:52] * winr4r adores apollo [18:52] Sigh. Twitter is making me jealous. Not only are there tons of #marac tweets, but now Capy games are showing off the ridiculous 25-foot screen installation of Super TIME Force in LA. [18:53] winr4r: hehe [18:53] theres some pics on there of my other cats too [18:54] I don't think i've actually done a "cats" set hto ¬_¬ failure by me there [18:54] anyway, dads birthday meal tonight :/ [18:54] laters [18:54] bye! [19:05] mistym: I'll refrain from posting beach pics, then [19:06] ;o [19:06] Nothing like capping a conference with a beach trip [19:08] OH GOD [19:08] IS JASON GOING TO GO CLEAN SHAVED AGAIN [19:08] https://twitter.com/#!/textfiles/status/191239290158710785/photo/1 [19:09] *suspense* [19:09] (talking of twitter) [19:13] It feels surreal seeing him hatless [19:14] haha [19:16] that will pass [19:17] the hat is inside his hair, you will see it after the cut [19:17] emijrp: hahaha [19:17] Rofl [19:18] yes, I think jason will be bald in 1 hour [19:18] no way [19:19] i'm going with clean-shaven [19:19] no beard [19:19] no hair [19:19] it was important enough to announce on twitter, so i'm guessing it's the beard [19:26] oh, i was wrong! [19:27] Phew; balance of nature not disturbed [19:29] I was right [20:10] Oh good [20:10] hairblogging [20:15] Looking good [20:23] SketchCow: yes you do! :D [20:24] who wants some cp/m http://archive.org/download/cdrom-rlee-peters-cpm-archive/rlee_peters_cpm_archive.zip/ [20:27] Whoops, deleted two cds by mistake [20:28] Shiiiiiit happens [20:28] Ironically shoving it into archive.org [20:28] and I killed it [20:28] I make mistakes too! [20:28] wat no [20:28] Pretty commercial CDs, no worries, they'l show again. [20:30] OK, all those CDs have Polish as the language now [20:30] excellent! 
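SketchCow fixed the chip-cds language field with whatever internal tooling he has at archive.org; that path isn't shown in the log. For anyone without universal access, a sketch of the same bulk edit with the public internetarchive CLI, assuming an account authorised to edit the items:

    pip install internetarchive
    ia configure                       # prompts for archive.org credentials
    # Set the language on every item in the collection:
    for id in $(ia search 'collection:chip-cds' --itemlist); do
      ia metadata "$id" --modify='language:Polish'
    done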
[20:57] I havent done bald recently [21:03] you haven't done no-beard in a while either [21:03] (you shouldn't, you're a whole lot less scary and Jason Scott without one) [21:05] ha ha [21:05] thanks for the fashion advice [21:06] be careful, here are more gays than archivists [21:08] i'm not gay! [21:09] the whole public bathroom thing was a misunderstanding [21:11] Whoops, fucked up AGAIN [21:11] Where's my hug [21:11] * winr4r hugs SketchCow! [21:24] winr4r: There were about ten FortuneCity sites on your list that we didn't have, but I have now downloaded those too. [21:25] alard: yay! [21:28] http://members.fortunecity.com/aaronsmom/ [21:29] :/ [21:29] found that while flicking through screenshots earlier [21:32] nice [21:34] it's a tribute for someone, by people who loved them, done as well as they could in the late 90s [21:37] curious, first image fail http://web.archive.org/web/20090203061353/http://members.fortunecity.com/aaronsmom/ [21:38] for a while fortunecity was doing referer blocking such that the wayback machine got their placeholder image for everything [21:38] DFJustin: ah [21:39] aaronsmom has got it going on [21:39] well among other things, that's where i hope the screenshot collection will be useful [21:40] this guy http://awt.ancestry.com/cgi-bin/igm.cgi?op=GET&db=lockard-park&id=I52598&ti=5541 [21:41] the downside: i had to disable javascript in the script i'm using, because their ads had a hilarious "slide up out of nowhere and cover up all the content" thing going on [21:41] Well fuck, I did it FUCKING AGAIN [21:41] emijrp: yes [21:41] Well, of 80 CD-ROMs, I murdered 4 in their beds [21:41] SketchCow: hey remember that thing you did three times? [21:41] Made a few choices with the scripting I shouldn't have. [21:41] i think it'd be a good idea to not do that [21:42] seriously though, what happened? [21:42] Pressing control-c during a zip-up makes it go "OK, stop running the zip, but keep running the script that calls it." [21:42] booooo [21:43] oh shit :< [21:43] Again, I'm not too worried [21:43] and "keep running" means "rm -rf"? [21:43] I can get these [21:43] Well keep running means rm that thing being zipped, yes [21:43] Normally I don't do that, got lazy, made mistake. [21:43] ah [21:43] so you didn't actually lose anything [21:43] Anyway, I'll just tell that guy we need to re-upload. [21:43] No, I definitely lost stuff that was at arm's reach [21:43] Dude must re-send [21:44] bummer [21:44] It's OK, we have a billion of these things going [21:46] somewhere, in my loft or piled under other piles of shit i have a couple of magazine cover CDs from the late 1990s [21:46] I just moved the shareware cd collection to the title bar of archive.org's software section. [21:46] It was time to do it. [21:46] Oh my god, I have so many cds, I am considering having someone come over or who is local to me to do it. [21:46] from the late 1990s where it's like "hey you don't have the INTERNET but here is SOME OF IT" [21:46] i need to get those to you some time [21:47] SketchCow: i can imagine, i have a *few*, at most like 5, but i think it was an interesting time [21:47] man, read this http://www.chron.com/CDA/archives/archive.mpl/1998_3052078/man-jailed-after-friend-shot.html [21:48] emijrp: moral of the story: don't be friends with stupid people [21:50] emijrp: WAIT HOLD ON AARON [21:50] Also: if a friend says "Hey, watch this!" and pulls out a gun, don't stick around. 
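A guess at the guard that would have saved those four CDs ([21:42]): when Ctrl-C kills a child zip, bash carries on with the next command unless told otherwise, so gate the rm on zip's exit status and make the script itself die on interrupt. Paths and layout below are placeholders, not SketchCow's actual script:

    #!/bin/bash
    set -e                    # abort the script when any command fails
    trap 'exit 130' INT       # Ctrl-C kills the whole script, not just the current zip
    for dir in cd-*/; do
      # rm only runs if the zip completed successfully
      zip -r "${dir%/}.zip" "$dir" && rm -rf "$dir"
    done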
[21:50] Wow, one of the CD-ROMs has been downloaded 1,1442 [21:50] 1,442 [21:51] holy shit [21:53] how do archive.org do backups anyway? [21:53] i mean that is just an unbelievably huge amount of shit [21:55] they duplicate off-site [21:59] winr4r: they have the data on two nodes locally, and try to duplicate it off-site (like in alexandria) [22:00] And the question is, have they lost data? [22:00] i have to wonder if they regularly scrub items [22:01] (verify hashes against those in the files.xml file) [22:02] TOP SECRET. [22:02] They do things. [22:03] oh boy. some of these items are crap like full anime episodes [22:06] I just imagine a giant-arse RAID array [22:06] dunno why [22:06] ugh [22:06] no [22:09] has this been shared here yet? http://www.masswerk.at/googleBBS/ [22:11] "ERROR: Quota Exceeded. Please see http://code.google.com/apis/websearch" :< [22:11] Typical BBS error. [22:12] lol [22:21] Poor google [22:21] Getting DDOS [22:21] Coderjoe: You realize a lot of this is likely to go dark. [22:25] OK, CDs done [22:25] Going out to see the Comic-Con Documentary... with Morgan Spurlock presenting! And Q&A. [22:25] http://archive.org/search.php?query=collection%3Achip-cds&sort=-publicdate [22:26] SketchCow: yes. unfortunately [23:30] This is just morbid curiosity at this point, but that grep is still going.
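Postscript on that grep, pulling together the fix from [09:45]-[10:00]: old greps (2.5.x, fixed by 2.9) are pathologically slow matching under multibyte locales, and both the pattern and the data here are plain ASCII, so forcing the C locale is safe. Either per command:

    LANG=C LC_ALL=C grep -oE 'http://gallery.me.com/[^"<]+' data/p/pe/per/pertormod1/gallery.me.com/webdav-feed.json

or once near the top of seesaw-s3.sh:

    export LANG=C LC_ALL=C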