#archiveteam-bs 2017-12-22,Fri


***Jon has quit IRC (Quit: ZNC - http://znc.in)
RichardG_ has joined #archiveteam-bs
RichardG has quit IRC (Read error: Connection reset by peer)
[00:00]
icedice2 has quit IRC (Read error: Operation timed out) [00:06]
robink has quit IRC (Quit: No Ping reply in 210 seconds.)
tuluu_ has quit IRC (Quit: No Ping reply in 180 seconds.)
kristian_ has quit IRC (Quit: Leaving)
tuluu has joined #archiveteam-bs
robink has joined #archiveteam-bs
[00:17]
dd0a13f37 has quit IRC (Quit: Connection closed for inactivity)
robink has quit IRC (Read error: Connection reset by peer)
zgrant has quit IRC (Quit: Leaving.)
zgrant has joined #archiveteam-bs
robink has joined #archiveteam-bs
[00:29]
robink has quit IRC (Read error: Connection reset by peer) [00:47]
robink has joined #archiveteam-bs [00:52]
<godane> SketchCow: i'm starting to upload my random captures of qvc japan [01:06]
.... (idle for 15mn)
***kyounko has quit IRC (Read error: Operation timed out)
Ceryn has joined #archiveteam-bs
[01:21]
wp494 has quit IRC (Read error: Operation timed out) [01:28]
zgrant has quit IRC (Quit: Leaving.)
zgrant has joined #archiveteam-bs
zgrant has quit IRC (Client Quit)
wp494 has joined #archiveteam-bs
zgrant has joined #archiveteam-bs
[01:34]
<SketchCow> hurrah [01:38]
<godane> i'm going to capture a ton of qvc japan over the next few days [01:40]
***wp494_ has joined #archiveteam-bs
wp494 has quit IRC (Read error: Operation timed out)
[01:46]
...... (idle for 25mn)
pizzaiolo has quit IRC (Remote host closed the connection) [02:14]
wp494_ is now known as wp494 [02:25]
........... (idle for 50mn)
Odd0002 has quit IRC (Ping timeout: 600 seconds) [03:15]
Odd0002 has joined #archiveteam-bs [03:24]
Odd0002 has quit IRC (Ping timeout: 506 seconds) [03:34]
Odd0002 has joined #archiveteam-bs [03:44]
......... (idle for 40mn)
wp494 has quit IRC (LOUD UNNECESSARY QUIT MESSAGES)
wp494 has joined #archiveteam-bs
[04:24]
Odd0002 has quit IRC (Ping timeout: 248 seconds) [04:39]
.... (idle for 16mn)
qw3rty118 has joined #archiveteam-bs
BlueMaxim has joined #archiveteam-bs
[04:55]
qw3rty117 has quit IRC (Read error: Operation timed out) [05:01]
.... (idle for 16mn)
zgrant has left [05:17]
Odd0002 has joined #archiveteam-bs [05:30]
.......... (idle for 47mn)
Odd0002 has quit IRC (Quit: ZNC - http://znc.in)
Odd0002 has joined #archiveteam-bs
[06:17]
Valentin- has joined #archiveteam-bs
Valentine has quit IRC (Ping timeout: 506 seconds)
[06:32]
jschwart has joined #archiveteam-bs [06:45]
jschwart has quit IRC (Client Quit) [06:50]
...... (idle for 28mn)
Mateon1 has quit IRC (Ping timeout: 260 seconds)
Mateon1 has joined #archiveteam-bs
[07:18]
kimmer12 has joined #archiveteam-bs [07:32]
kimmer1 has quit IRC (Ping timeout: 633 seconds)
kimmer1 has joined #archiveteam-bs
[07:38]
kimmer12 has quit IRC (Ping timeout: 633 seconds) [07:48]
kimmer1 has quit IRC (Ping timeout: 633 seconds) [07:53]
schbirid has joined #archiveteam-bs [08:06]
.......................... (idle for 2h9mn)
<odemg> SketchCow, godane could either of you put 30 new 750GB 2.5" Momentus Hybrid SSHD drives to use? (on at/ia related things) [10:15]
<jrwr> odemg: yes, make an rsync box [10:28]
***Mateon1 has quit IRC (Remote host closed the connection)
Mateon1 has joined #archiveteam-bs
[10:36]
<odemg> jrwr, yeah I'll be getting these to whoever can do something like that with them, document what they did and talk about it in a r/DataHoarder post [10:43]
<schbirid> nice [10:44]
.... (idle for 19mn)
***pizzaiolo has joined #archiveteam-bs [11:03]
..... (idle for 20mn)
<Igloo> odemg: Interesting. I'll happily upfront most of a server build for an rsync target in the US
Ideally in California
[11:23]
<ranma> fuck
i hate it when i miss some upload to a youtube channel i've been following
*when an upload gets deleted
[11:23]
<odemg> Igloo, exactly what we need [11:24]
<Igloo> I'm not US based though, we'd need someone to help look at colo [11:24]
***Specular has joined #archiveteam-bs [11:25]
<Specular> so I've read flash memory cards have a terrible shelf life for retaining data without corruption. Hope my two year old backup of something isn't fucked. Will be buying a HDD tomorrow and transferring the contents. [11:27]
<jrwr> Igloo: there are a few out there, the best is if we can get hands on it there at the colo [11:27]
<Specular> btw, for hdd brands it seems HGST is more reliable in those famous BackBlaze results, but it's harder to compare to WD drives since they lack the same volume. I've always used WD without problems, but is HGST more reliable for long-term storage (even their 2.5" drives for ex)? [11:36]
<jrwr> Omfg this is a thing now https://www.reddit.com/r/spacex/comments/7lez5n/elon_musks_midnight_cherry_tesla_roadster/
That's the car ON the payload mount
[11:50]
..... (idle for 20mn)
<Igloo> jrwr: Yep. We'd need someone who either lives local to be able to do it or some sort of IPKVM / power bar solution [12:11]
<HCross2> hmm [12:11]
<Igloo> jrwr sending the car to mars huh, interesting [12:11]
<HCross2> I know where.. but they arent cheap at all [12:12]
<Igloo> Define not cheap [12:12]
<HCross2> not sure of an exact figure. I was thinking Psychz but they arent cheap [12:14]
<Igloo> General Button Pressing $0
How much is specific button pressing in order?
[12:16]
<HCross2> a million quid
per button
extra if the button is very deep
£5 million if it needs a paperclip
[12:21]
***icedice has joined #archiveteam-bs [12:34]
<Igloo> I've asked for pricing for a total of 4u HCross2. 2 x 2u SuperMicro chassis, one 24 bay for storage / s3 and one 16 bay with disks and a load of NVMe drives for the megawarc factory
If you then back to back the servers over 10G it'll be stupid fast.
[12:36]
<HCross2> ah nice [12:36]
<Igloo> Lets see. Be a good case and maybe a chance to use some of the crowd sourced funds that asparagirl was looking at [12:37]
***icedice has quit IRC (Read error: Connection reset by peer) [12:49]
icedice has joined #archiveteam-bs [12:55]
<godane> SketchCow: so when is your new box of tapes getting shipped?
to me
[13:06]
<jrwr> Oh man a new FOS [13:10]
<JAA> Specular: Backblaze's stats are interesting but essentially irrelevant for almost every use case. The drives are way outside the specs in those storage pods...
jrwr: Yeah, Elon's sending the car because nobody wanted to send a real payload due to the risk involved. At least that's what I read.
[13:22]
<jrwr> that is correct
im 100% happy that he IS sending something cool
[13:23]
<Specular> JAA, by that do you mean that they're being used far more than intended? [13:24]
<JAA> Specular: They're exposed to way more vibration, in particular.
Because there are so many other drives nearby.
Most of the drives they use aren't even rated for NAS usage. And if they are, it's usually only for up to 6 drives or something like that.
[13:25]
<Specular> it's interesting that I haven't seen that brought up in discussion of BB's results, but vibration would be a considerable factor for sure [13:28]
<JAA> It's mentioned every time someone posts the stats on /r/DataHoarder, at least. [13:29]
.... (idle for 18mn)
***BlueMaxim has quit IRC (Quit: Leaving) [13:47]
.......... (idle for 48mn)
Stilett0 has quit IRC (Read error: Operation timed out) [14:35]
Stilett0 has joined #archiveteam-bs [14:48]
............. (idle for 1h2mn)
ola_norsk has joined #archiveteam-bs
MrDignity has quit IRC (Remote host closed the connection)
[15:50]
Odd0002 has quit IRC (Read error: Operation timed out) [15:58]
..... (idle for 20mn)
Odd0002 has joined #archiveteam-bs [16:18]
........ (idle for 38mn)
zgrant has joined #archiveteam-bs
Specular has quit IRC (Quit: Leaving)
[16:56]
........ (idle for 36mn)
icedice has quit IRC (Read error: Connection reset by peer) [17:36]
beardicus has quit IRC (bye)
beardicus has joined #archiveteam-bs
[17:41]
.... (idle for 16mn)
zgrant has quit IRC (Quit: Leaving.) [18:01]
<ez> JAA: its partially a datahoarder meme (a bit like ECC on ZFS). what BB actually said is: we dont know if vibration of a lot of nearby drives matters (it probably does), what we DO know is that consumer vs "nas rated" enterprise BOTH fail at the same rate in the "hostile pod environment" - https://www.backblaze.com/blog/enterprise-drive-reliability/ [18:04]
<ola_norsk> Odd0002: when in an sqlite db, would having a separate 'text' content table, to prevent storing identical texts in another table, be beneficial? Or could the overhead of keeping an index of it all make it slower?
Odd0002: i'm guessing quite a huge percentage of tweets contain the exact same text content
[18:05]
<JAA> ez: Right. They don't have many enterprise drives though. I'd like to see a proper statistical analysis of the data.
The uncertainty range on the enterprise drives would be huge.
(Or confidence interval, if we want to go by statistical lingo.)
[18:06]
<ez> JAA: its also quite possible that the ent rated drives ARE better, when subjected to milder conditions
and theres no difference only when subjected to extremes
[18:07]
<JAA> True
I just remembered this analysis, by the way: https://hackernoon.com/applying-medical-statistics-to-the-backblaze-hard-drive-stats-36227cfd5372
[18:07]
<ez> anyhow, given that price per gb drops by what, 15%-20% a year? i guess the worse MTBF could be worth the additional density
[18:09]
<JAA> Not sure if that includes any enterprise drives, too lazy to check all the model numbers right now.
Haha, price drops per GB, I wish...
[18:09]
<ez> it does, its just a flattening-out sigmoid [18:10]
<JAA> Prices here haven't changed much in years. It only started again in the recent months. [18:10]
<ez> it is approaching the limit at an ever decreasing rate, but it does drop [18:10]
<JAA> Here != US, just in case.
I'm curious to see what the next few years will bring though with MAMR and HAMR.
[18:10]
<ez> yea, theres a lot of weird markup and outright exotic market events
like those 4-6TB drives in USB frames
being way cheaper than the same thing, standalone
[18:11]
<JAA> Yeah
One random article from Germany mentions that hard drive prices dropped from 0.09€ to 0.06€ per GB between 2012 and 2017. So apparently it did drop a bit (I didn't really notice that, but I also haven't bought many HDDs in the last few years), but definitely not 15-20% per year.
SSDs dropped from 0.99€ to 0.17€ over the same timeframe, by the way.
[18:11]
<ez> JAA: ssd and hdd are on different parts of the sigmoid [18:16]
<JAA> Yeah yeah, I know. [18:16]
***kimmer1 has joined #archiveteam-bs [18:17]
<ez> JAA: as for MAMR, that alone is supposed to flatten the sigmoid too, to the point it will be comparable to ssd perhaps
trouble being of course nobody does mamr yet, and second, mamr writes are slow, like really slow
https://regmedia.co.uk/2017/10/12/wdc_mamr_hdd_vs_ssds.jpg
its WD marketing department, so i'd take it with grain of salt, but doesnt smell of complete bullshit either
[18:19]
<ola_norsk> Odd0002: here's the database 'schema' i have now https://imgur.com/a/gNxNh , could it be done even better perhaps? [18:24]
***jschwart has joined #archiveteam-bs [18:25]
<ez> ola_norsk: why split the tweet text?
also, you need a table of who-follows-who to make that data useful
[18:26]
<JAA> ez: I'm just hoping that the prices actually decrease in Europe again as well.
prices per GB, that is.
[18:27]
<ez> EU retail prices are kinda insane, yea
of everything computer basically
[18:27]
<JAA> Here's a compilation of prices per GB over the last ~20 years: https://blog.tralios.de/wp-content/uploads/2016/03/Festplattenpreise2016.png
In Germany
[18:28]
<ola_norsk> ez: the separate text/content table i think could prevent storing identical tweet content [18:28]
<ez> i call it "vat abuse"
sure, we DO have vat
[18:28]
<JAA> Prices now are essentially the same as ca. 2010. [18:28]
<ez> but that does not explain the 30% markup on top of that
most of places have
[18:28]
<JAA> What does it have to do with VAT? [18:28]
<ola_norsk> ez: i'm not sure though, but e.g tweets containing just 'LOL!!' etc
ez: instead of storing a multiple of 'LOL!!" tweets, i mean
[18:28]
<ez> JAA: consumers who compare prices with US think "oh, thats just VAT, thats why its more expensive in EU"
but the reality is that we have simply much higher markups, too
[18:29]
<JAA> Ah, yeah. [18:31]
<ez> ola_norsk: the bloated index wont make up for such a "compression". you do, however, want to store retweets as references in some way
retweets are usually stored as a separate table of 'tweetid, whoretweetedit'. its somewhat awkward as you dont get a "timeline" of events in one table, but it is compact and fast to query that way
[18:32]
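A minimal sqlite3 sketch of the layout ez describes: one row per tweet with the author stored inline, plus a separate retweet table keyed by tweet id. Table and column names here are illustrative, not taken from tweep or any real schema; the sample row reuses the example tweet quoted later in this conversation.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # in-memory for illustration; use a file path in practice
conn.executescript("""
CREATE TABLE tweets (
    id     INTEGER PRIMARY KEY,  -- tweet id from the scrape
    date   TEXT,
    author TEXT,                 -- author kept inline, per the discussion (no dedup table)
    text   TEXT
);
CREATE TABLE retweets (          -- 'tweetid, whoretweetedit' as described
    tweet_id     INTEGER REFERENCES tweets(id),
    retweeted_by TEXT
);
CREATE INDEX idx_retweets_tweet ON retweets(tweet_id);
""")
conn.execute("INSERT INTO tweets VALUES (43942641632403456, "
             "'2017-12-21 20:33:58', 'OfficialFayBla', 'Ur welcome')")
conn.execute("INSERT INTO retweets VALUES (43942641632403456, 'someuser')")

# Reconstructing "who retweeted what" is a single indexed join:
rows = conn.execute(
    "SELECT t.author, r.retweeted_by FROM retweets r "
    "JOIN tweets t ON t.id = r.tweet_id"
).fetchall()
print(rows)  # [('OfficialFayBla', 'someuser')]
```

The trade-off is as stated above: no single chronological table of tweets plus retweets, but both tables stay compact and the join is cheap.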
<ola_norsk> ez: hmmm..might be able to pick out '@username' from the tweet texts. I think tweep does pick the first '@' as sender, then include the others in the content text [18:35]
<ez> oh yea, indexing threads is nice to have.
basically for each tweet you need to get 1) whom it refers to (multiple) 2) who retweeted it (multiple)
but things like the actual author of a tweet can be trivially part of the tweet itself, as well the other data you have now separate
[18:36]
<ola_norsk> i have to go by the output of 'tweep' for now i think https://ia801505.us.archive.org/24/items/tweeptestcrash/tweets.txt [18:38]
<ez> ola_norsk: yea, with that its easier to just store one line = one row
especially if you use sqlite and usernames can be indexed simply by the text
[18:42]
<ola_norsk> ok [18:43]
<ez> the retweet stuff is important only when doing a full scrape. hashtag search will show only original tweets
so no way the entries could be duplicates
[18:44]
<ola_norsk> ez: what i mean is if multiple tweets are found containing the exact same text content (example: "43942641632403456 2017-12-21 20:33:58 CET <OfficialFayBla> Ur welcome ") [18:46]
<ez> ola_norsk: also try NetNeutralty and NetNeutralty [18:46]
<ola_norsk> ez: if some other tweet also is just "Ur welcome " [18:46]
<ez> the typos virtually always correspond to legibility of the users, ie you get cher-like tweets
NetNetruality and NetNeutralty i mean
maybe there are other common typos
ola_norsk: again, its "deduping" a problem which isnt
[18:46]
<ola_norsk> ez: but could storing unique texts prevent a lot of duplicates?
ok
[18:47]
***kimmer12 has joined #archiveteam-bs
odemg has quit IRC (Remote host closed the connection)
kimmer1 has quit IRC (Ping timeout: 633 seconds)
kimmer1 has joined #archiveteam-bs
icedice has joined #archiveteam-bs
[18:55]
<ola_norsk> after looking through a couple of tweeted links, it seems sending them all to the waybackmachine might cause some 'dubious' sites and pictures to be stored; Is that a problem? :]
dubious, as in adult content of various sorts :D
e.g what's the policy of IA on storing nudes?
ola_norsk don't exactly want to be banned for piping pr0n :D
[19:05]
***kimmer12 has quit IRC (Read error: Operation timed out)
kristian_ has joined #archiveteam-bs
[19:09]
<ola_norsk> if _accidentally_ 'waybacking' adult material, does IA blame the submitter or the site that got waybacked?
e.g if a dickpic from a twitter feed got in there..
[19:13]
<icedice> What bit rate should I convert a 256 kb/s MP3 to AAC at for it to be somewhat lossless (yeah, I know it's lossy, I just want the best quality I can get out of that export) [19:22]
<JAA> For best quality, highest bitrate available in AAC.
But really, why do you want to do that?
[19:22]
<ola_norsk> 1kb lower with constant bitrate?
what JAA said, why re-encode
[19:22]
<icedice> 320 kb/s is max in Adobe Premiere Pro
Because I want to export the video
I can't export it as an MP3 track
[19:23]
<ola_norsk> increasing bitrate on already compressed audio is futile [19:24]
<icedice> Yeah, I figured [19:24]
<ola_norsk> just export them separately, and use e.g ffmpeg to merge the two [19:24]
<JAA> Yep, even with 320 kb/s AAC, you'll lose quality, but you get a larger file. [19:24]
<icedice> I'll just go with 256 kb/s
How would it be compability-wise? It's going into an MP4 container.
[19:24]
<ola_norsk> icedice: if you export the video, and have the original mp3, ffmpeg can merge them [19:25]
<JAA> Where do you want to play it? [19:25]
<icedice> idk [19:25]
<JAA> If it's a computer, you can find software that can handle it for sure.
Embedded systems, *shrug*
[19:26]
<icedice> It wouldn't surprise me if the person who is going to get the video uploads it to YouTube and/or Facebook [19:26]
<ola_norsk> they recode anyway i think [19:26]
<icedice> I doubt they're into Vimeo or Dailymotion at least
Yeah
YouTube is AAC
And I figured that they do a crappier job than Adobe Premiere Pro
[19:26]
***schbirid has quit IRC (Quit: Leaving) [19:27]
<JAA> They probably spent a ton of time optimising their transcoding.
Whether it's "maximum quality" is a different question though.
[19:27]
<icedice> YouTube has shit quality though [19:28]
<JAA> Yep
I tell that to my girlfriend all the time, but she insists on using it anyway.
(For listening to music)
[19:28]
<ola_norsk> anyway, i'm no expert by far, but i'd say #1 export the video (without audio) then use ffmpeg to merge the untouched audio with the video stream as e.g .mkv, and upload that
[19:28]
<icedice> I've cut out parts of the audio track, so I'd need to do that again in like Audacity and then resave it [19:28]
<ola_norsk> i think working on it as wav is then the best [19:29]
<icedice> Which I don't feel like redoing (and would have trouble getting exactly right compared to the video)
Yeah
I think I'll just go with AAC
The assignment is late enough as it is and my teacher seemed pretty pissed at me for uploading the project file instead of the video file as filler to stall the whole thing
[19:29]
<ola_norsk> i doubt a teacher would piss on audio quality if it's not 100% shitty :D
[19:31]
<icedice> The video is fucked enough anyway
My phone stopped filming at the 1 hour mark when it ran out of space
My classmate then started filming it with his phone
But the selfiestick/tripod hybrid pressed the power button and shut down his phone
So another classmate continued filming
TL;DR: There are two spaces in the video track that I filled with two video screenshots each since there's no video of it
Luckily the external mic worked well and got all of the audio
[19:31]
<ola_norsk> if your teacher can hear the difference between 256kb/s audio and 320kb/s.. :D [19:35]
<icedice> There's no 320 kb/s though
just 256 kb/s and bloated 256 kb/s
[19:36]
<ola_norsk> i mean if you're planning to reencode the audio [19:36]
<icedice> Yeah
I'm going with 256 kb/s though
[19:36]
<ola_norsk> 128kb is CD quality..
aye
[19:37]
<icedice> I thought that was 192 kb/s
I was just wondering if it would have been possible to go with less kb/s than 256 and still achieve the same audio quality since AAC is better quality-wise than MP3
[19:37]
<ola_norsk> there's no point in going up in kbs on an already compressed audio file though. It would just make a bigger file, with the same (or even a bit shittier) quality
but, there's no need to even touch the audio if you have the best possible copy of it
[19:38]
<icedice> JAA: Well, YouTube is convenient. I use it for music listening as well. Even if I had a headset that was high-end enough for me to hear the difference between YouTube audio and uncompressed audio I probably wouldn't notice anyway.
Yeah, well I was wondering about going down in kb/s
Like 256 kb/s MP3 = how much kb/s in AAC
[19:40]
<ola_norsk> https://superuser.com/questions/277642/how-to-merge-audio-and-video-file-in-ffmpeg [19:40]
<icedice> Quality-wise
I'm pretty sure I have that in a .txt document from before
It's just having to cut it again that is a pain in the ass
I guess it could be possible in Avidemux if I had the time
[19:40]
<ola_norsk> '-c:a copy' would keep the audio from being recoded [19:42]
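The merge recipe from that superuser link boils down to a single ffmpeg invocation with stream copy on both tracks. A sketch in Python that only assembles the command line (the filenames are placeholders; actually running it requires ffmpeg to be installed and real input files):

```python
def build_merge_cmd(video_in: str, audio_in: str, out_file: str) -> list:
    """Build an ffmpeg command that muxes audio and video without re-encoding."""
    return [
        "ffmpeg",
        "-i", video_in,   # video-only export from the editor
        "-i", audio_in,   # the untouched original audio
        "-c:v", "copy",   # copy the video stream as-is
        "-c:a", "copy",   # '-c:a copy' keeps the audio from being recoded
        out_file,         # e.g. an .mkv or .mp4 container
    ]

cmd = build_merge_cmd("video.mp4", "audio.mp3", "merged.mkv")
print(" ".join(cmd))
```

Because both streams are copied, the output container holds the original bits; no quality is lost in the merge step.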
<icedice> I still need to cut out extra shit from the audio track
And it has to match the video track exactly, otherwise the lecturer will look like she's lip syncing
[19:42]
<JAA> icedice: Well yeah, it's not just that, also that you have to rely on internet connectivity and the music you want to listen to being available on YouTube.
The main reason for me personally is quality though.
[19:43]
<ola_norsk> icedice: if you're planning to use Youtube to show it anyway, it's going to be recoded no matter what at playback
[19:44]
<JAA> transcoded*, you mean, right? [19:44]
<ola_norsk> that
and i'd be frightened of a lecturer who would exclaim "Wait just a damn minute! ..This audio is 244kb/s, not 320!!"
that'd be golden-ears deluxe
[19:44]
<icedice> JAA: You can download it and demux it to AAC/M4A using JDownloader 2 or Youtube-DLG though. I do that sometimes.
And yeah, I should search for the FLAC files, I'm just a bit lazy with that atm
I'll have some FLAC torrenting marathon some day when I have time though
[19:55]
<ola_norsk> with 'youtube-dl -k' it keeps the video and audio [19:56]
<icedice> https://github.com/MrS0m30n3/youtube-dl-gui
^ I was talking about that youtube-dl GUI
[19:57]
<ez> youtube-dl -f bestaudio -o out.m4a
its horrible tho, 128k aac iirc
[20:00]
<ola_norsk> i just use -k / --keep-fragments [20:00]
<ez> why? [20:00]
<ola_norsk> to keep best audio present since youtube-dl often merges and deletes [20:01]
<ez> 251 webm audio only DASH audio 142k , opus @160k, 3.59MiB
oh neat
youtube-dl -f bestaudio -o out.opus then
ola_norsk: huh?
no
ytdl preserves the bitstream unless you tell it to do something stupid, like output mp3
[20:01]
<ola_norsk> e.g 'youtube-dl -k <link>' will keep the audio and video separate, without deleting them when merging into a container
[20:04]
<ez> yea
and if you let it do its thing, that is just set output container compatible with the dash track, it will mux it from fragments into a single file
this is *not* transcoding
[20:05]
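The two invocations discussed above, sketched as a small Python helper that only assembles the argument list. The flag behavior is as described in the conversation (verify against `youtube-dl --help` on your version; `-k` in particular is documented there as keeping intermediate files after post-processing):

```python
def build_ytdl_cmd(url: str, audio_only: bool = False,
                   keep_streams: bool = False) -> list:
    """Assemble a youtube-dl command line; does not run anything."""
    cmd = ["youtube-dl"]
    if audio_only:
        # ez's variant: grab just the best audio track, bitstream untouched.
        cmd += ["-f", "bestaudio", "-o", "out.%(ext)s"]
    if keep_streams:
        # ola_norsk's variant: keep the separate a/v files after muxing.
        cmd += ["-k"]
    return cmd + [url]

print(build_ytdl_cmd("https://example.com/watch?v=...", audio_only=True))
```

The `https://example.com/...` URL is a placeholder. As ez notes, neither variant transcodes anything; youtube-dl only remuxes the downloaded streams unless explicitly told to convert.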
<ola_norsk> i find it useful when archiving a youtube video that's pretty much just an interesting speech/talk [20:05]
<ez> oh [20:06]
<ola_norsk> e.g this item https://archive.org/details/Tay_Zonday_Net_Neutrality_talk [20:06]
<ez> your intent actually is to keep a/v separate
yea, that probably makes sense for talk show
[20:06]
<ola_norsk> aye
that way it's possible just to listen to it, since it's just talk anyway with no important visual stuff
[20:06]
speaking of which, it would be cool if IA made a secondary 'mediatype' possible for items, like with 'test item' that is also listed as both main 'community <type>' and 'Test Collection' :D https://archive.org/details/superfunky59_Series_of_Tubes_Music_Video
this item is audio, but _does_ contain video
[20:21]
<icedice> Btw, has Internet Archive gone to Canada yet?
They'd set up a backup facility there, right?
[20:23]
<ola_norsk> i think so
someone posted a link here once where the infrastructure could be viewed
[20:23]
<JAA> ola_norsk: That's not what the mediatype is about though. What you mean is that an item can be in multiple collections, and that's already the case I think. [20:25]
<ola_norsk> ah ok
it's not in a collection other than 'Community Audio', but i'm guessing IA also goes by filetypes
[20:25]
<icedice> I don't get why they'd set up shop in Canada
Not counting that it's a Five Eyes country, it's geographically the US' neighbour and it piggybacks on the US' soon to be much worse Internet
[20:26]
<ola_norsk> if it was made available, they should though [20:27]
<icedice> Switzerland or something would have been better imo. Very good privacy laws and geographically distant and safe. [20:28]
<ola_norsk> i've been sending some emails around here in norway [20:28]
<icedice> But I'm just a random guy on the Internet with >opinions, so what do I know [20:29]
<ola_norsk> problem is, i kind of need some sort of 'presentation'
i'm just a drunk fuck on an island on the west coast of norway; So, it would need someone with a bit more 'umph' behind it to be anything more
when i got the question (translated); 'have you talked to Brewster Kahle?', i was damn close to writing back 'who the fuck do you think i am? All i asked you was your stance on a question!'
also, it seems to be a common belief that IA is merely 'waybackmachine'
so, a kind of official 'pitch deck' would be very nice to present
but, according to 'Norsk Dataforening' (which is not small)..a Norwegian mirror is 'i like the idea'
followed up with 'what more can you tell me about it'..and there's my problem
[20:29]
<icedice> Are you pitching a Norwegian Internet Archive mirror or am I misunderstanding something? [20:37]
<ola_norsk> so if someone with a bit more 'panache' .. http://www.dataforeningen.no/in-english.128921.no.html [20:37]
<icedice> Because that would be pretty sweet [20:37]
<ola_norsk> icedice: aye [20:37]
<icedice> I remember seeing a video on YouTube of some old mine in Norway that they had made into a data center
Looked pretty sweet
[20:38]
<ola_norsk> according to their representative it's a 'good idea'..but i can not be the one carrying it further :D [20:38]
<ez> EU IA is an interesting dilemma
on one hand, storage hardware is almost twice as expensive
on the other hand, bandwidth is about 4x cheaper
[20:41]
<ola_norsk> Norway is not a member of the EU, we're kind of strange like that :D
<ez> i guess ia is more constrained by storage than bw tho
<ola_norsk> aye, but e.g here in norway there's not only focus on preservation of old shit, but also 'green power'
[20:41]
<ez> thats probably fine, green power often means cheaper power these days
its funny for norway to be green obsessed, when they're arguably the biggest source of CO2 of all eu countries
not directly, but they originate that much oil nonetheless
[20:43]
<ola_norsk> hehe, that might be true, but the gasoline is still more expensive here than the same gasoline when it's exported :/
same source though..weird how that works :D
[20:44]
<ez> youre a nordic country
nordic means insane taxes
[20:44]
<icedice> The Netherlands, Germany, and France are pretty cheap for hosting though [20:44]
<ez> except iceland (?) for some reason
they were never true vikings to begin with
[20:44]
<ola_norsk> they're better than the Swedes ;)
hehe, but damn, this is getting way off topic
back to topic: I've envisioned e.g https://greenmountain.no/
[20:45]
<icedice> Yeah, that's one of the data centers I was just looking at [20:47]
<ola_norsk> that's where i started to nag first, on twitter.. ~7 months ago, never got a response..so, it needs someone bigger..And Dataforeningen is mildly put quite big [20:47]
<icedice> https://www.youtube.com/watch?v=gYrvRMWiZCA
https://www.youtube.com/watch?v=aTjF2hJiack
https://www.youtube.com/watch?v=oN9on73BmSs
[20:48]
<ola_norsk> all i know is that when an official representative of NCS writes back 'I like the idea, what more can you tell me about it?'..that's no small thing
"The Norwegian Computer Society turned 50 years in 2003"
it needs a response with equal punch though..sadly, i can not provide that :/
basically, every IT company in Norway is a member of NCS
probably Green Mountain AS as well
at the very least, i need some sort of presentation endorsed by IA, or someone in IA, to send
[20:50]
<Somebody2> ola_norsk: As I think I mentioned the last time you brought this up -- you *do NOT need any permission* to mirror a whole bunch of IA. [20:55]
<ola_norsk> can't just say 'i like backups!' :/
Somebody2: i need a 'pitch' though
[20:55]
<Somebody2> Hm, not sure what you mean. [20:55]
<ola_norsk> Somebody2: a kind of 'this is Internet Archive, and this is why our work is important' [20:55]
***kristian_ has quit IRC (Ping timeout: 360 seconds) [20:56]
<ola_norsk> whether it be video, article or powerpoint slides [20:56]
<Somebody2> Ah. Does the existing pitch currently displayed in a large banner on every IA page not suffice?
What about textfiles's 30-days-of-neat-stuff-on-IA tweets?
Also, you could identify particular collections on IA that you want to suggest NCS mirror, and make a pitch based on those.
I think it would be FABULOUS if NCS dedicated a few dozen petabytes to mirroring some of the publically downloadable parts of IA.
And they could do that without ANY coordination or permission from IA. Just do it, then send an email afterward going, ...
"Hi, thought you should know we've made a mirror of all this, if you want to direct people to it."
[20:56]
<ola_norsk> in my thought NCS would be the ones that swayed the Norwegian Government to make sure a complete mirror exists [20:59]
<Somebody2> "And we'd be glad to mirror some of your restricted stuff, too, now that we've shown we can do the job."
Having NCS lobby the Norwegian government does make sense, yes.
[20:59]
<ola_norsk> aye [21:00]
<Somebody2> My point is just that doing that does NOT require any coordination or involvement by IA.
At least for the initial dozen petabytes of mirrored data.
[21:00]
<ola_norsk> whether it be the educational department or the culture/historical department, both of which have a say in the matter [21:01]
<Somebody2> I'd reply back to the person who said it was a good idea, informing them that it doesn't require any coordination with IA, and ... [21:02]
<ez> Somebody2: in case iabak got some serious traction, is it possible to count on ia's *support* of that endeavor? [21:02]
<Somebody2> ez: Yes, IA is supportive of mirrors, as I understand (I don't have any formal connection to them, though). [21:02]
<ez> namely, better access to the current snapshot of ia's data. the current query api works nice for individual items, but it gets awkward quick when things are done on a scale this massive [21:03]
<Somebody2> ez: Well, the IA census seems to work well enough.
You are familiar with that, right?
[21:03]
<ola_norsk> Somebody2: do you have an email where i might forward the emails to? [21:03]
<Somebody2> ola_norsk: what, the ones from NCS? Why do you want to forward them? [21:04]
<ola_norsk> Somebody2: of the emails/replies i had with the person in NCS [21:04]
<Somebody2> Again, why forward them? You DO NOT NEED ANY HELP FROM archive.org FOR THIS. [21:04]
<ez> Somebody2: yes, im familiar with iamine from todd's effort to timestamp it [21:05]
<Somebody2> ez: good
ola_norsk: the next step is for you (and/or the person at NCS) to write up a proposal to the Norwegian government to fund storage.
ola_norsk: then just use the existing torrents provided by IA to mirror a bunch of stuff.
(storing it on the storage paid for by the Norwegian government)
[21:05]
<ola_norsk> Somebody2: i have no connection to IA, no real say in the matter. Basically, when it comes to being a 'middle man' for getting a complete active mirror of archive.org established.. I might not be that middle-man that's needed. :D [21:08]
<Somebody2> There IS NO MIDDLE-MAN needed, as I keep telling you. [21:08]
<ez> Somebody2: also, i didnt see this explicitly stated anywhere, but is this data for archive.org/web/* as well [21:09]
<Somebody2> You don't need the permission, knowledge, or connection to ANYONE at IA to do this.
ez: The IA census does include hashes for the Wayback Machine data, in the "private" section (since the files aren't directly downloadable).
[21:09]
<ez> neat [21:10]
<ola_norsk> Somebody2: how can a full copy of IA be made, functioning as a 'node', then? [21:10]
<Somebody2> ola_norsk: Once a mirror of the publically downloadable data has been made (and paid for), *THEN* reach out to IA about mirroring the rest. [21:11]
<ez> so basically "serious iabak" would amount to 1) better access to WBM data 2) a bit saner query api to query diffs from the last time. iamine is kinda slow, and i dont see any reason for it to be
its just a fairly straightforward database dump
[21:11]
<Somebody2> ez: Eh, data on IA really shouldn't be changing regularly, so no, I don't think better access to diffs is much of a problem. [21:12]
<ez> Somebody2: i mean the delta since the last time
ideally there would be some official IA append log structure, so people don't have to awkwardly reconstruct it every time
[21:13]
<Somebody2> As I see it, the main next step for IA.BAK is clients for more platforms, that are easier to install, and a bunch of promotion to get lots more people to sign up. [21:14]
<ola_norsk> Somebody2: that's kind of the problem as i see it.. ME, alone, reaching out is useless. I can barely reach the toilet in time when i have to take a piss. IA, like someone here said, is not a small thing. [21:14]
***sep332 has quit IRC (Read error: Operation timed out) [21:14]
<Somebody2> ez: A better log structure would certainly be nice, but I don't think it's a blocker for IA.BAK. [21:14]
ezits a blocker to do this in serverless fashion [21:15]
ola_norskSomebody2: i think the maximum of my effort and use would be to get someone in NCS and IA to contact eachother and talk further [21:15]
eziabak doesnt need to be centrally coordinated, it works perfectly fine as a stochastic endeavor
provided the input to the backed up space is uniform
which it isnt atm
[21:15]
***odemg has joined #archiveteam-bs [21:16]
Somebody2ola_norsk: Why, given that NCS *does not need any help from IA* in order to mirror a bunch of the content?
ola_norsk: I think the good use for your time and effort is to *inform* the person at NCS who you spoke to that they don't need IA's permission to mirror.
And encourage them to write up a grant proposal for server space.
ez: Let's not let the possibility of a serverless architecture block progress on an existing backup.
[21:16]
ezSomebody2: given that current iabak stands at 0.5% progress to backup IA, its a bit premature optimization
im stipulating that looser coordination would yield better number than that
im not interested in "better" platform support, im interested in a client which doesnt need to coordinate at all
[21:18]
Somebody2ez: OK, but can you write such a client without any change to IA's existing infrastructure? If no, it's not as good as one that CAN be written that way. [21:19]
ezi can provided ia provides authoritative snapshot over the domain so the randomly picked items are uniformly random
basically current server architecture is p much result of IA not doing that
[21:20]
Somebody2Yes, but they don't (yet).
I also don't understand what you mean by "randomly picked items are uniformly random"
[21:20]
ezthey must be [21:21]
Somebody2?
We should also take this to the #iabak channel
[21:21]
ezah [21:21]
Somebody2er, #internetarchive.bak [21:21]
ola_norskSomebody2: here, the email exchange, https://archive.org/details/InboxStordabuenprotonmail-temp-item ..I'm neither an organizer, spokesperson, nor lobbyist of any imaginable sort. So, if someone is able to bring it further, that would be cool. [21:34]
Somebody2Ha. OK, well thanks for opening up the dialog in any case. [21:35]
ola_norskhopefully there's someone more eloquent than me to keep it going though [21:36]
***BlueMaxim has joined #archiveteam-bs [21:36]
Somebody2ola_norsk: Could you at least reply once more to let them know they can move forward WITHOUT any coordination with IA?
I think that's not expected (most organizations keep much tighter hold of their materials than IA does)
so letting the person you spoke to at NCS know that would be good.
[21:41]
ola_norskSomebody2: like i mentioned before, that person, like many other seems to think IA is just waybackmachine.."I'm quite aware of Waybackmachine" [21:42]
Somebody2ola_norsk: Sure; so informing them that there are petabytes of material on IA that they could arrange to mirror without ...
... any coordination with IA is really good to inform them of!
[21:43]
ola_norsksee, that's the way beyond the level of complexity of projects, where i 'peace out!' :D [21:44]
Somebody2ola_norsk: Wait, just writing a single email saying "You can download a lot of IA without permission" is a high level of complexity?
How is that any more complex than the emails you already wrote?
[21:47]
ola_norskbecause after the reply to 'could i get some more concrete information about the idea?'..i did not get response back... [21:49]
Somebody2ola_norsk: I see. [21:49]
ola_norskso sending a 2 email without having gotten response..that's no good in my book :D
2nd*
[21:50]
Somebody2Ha. Well, I don't feel like I should write to them, because I don't speak Norwegian and I don't live in Norway. :-( [21:51]
ola_norskpretty sure they know english ;)
heck, even i know proper english when i put my beer soaked mind to it, watching out for typod and coloquial terms
[21:52]
Somebody2ola_norsk: Yeah, but I feel like they'd respect me less. [21:54]
ola_norskthen we have a problem..
so, change.org then?
[21:54]
Somebody2Yep, if you feel you've worn out your welcome, and we don't have any other Norwegians interested in stepping up... [21:55]
ola_norski've heard rumours there was another one..but alas, no more than that :/ [21:55]
Somebody2Oh hell, I suppose I'll write something quick up. [21:56]
ola_norskcan't get shittier than mine :D [21:56]
Somebody2:-P [21:57]
ola_norska smashingly, awesome, mezmerizing 'pitch deck' though..the kind that could say the harshest of wallstreet investors..
sway*
that would be best
voice over by Alex Jones.. "Here's why data preservation is important for survival of the human species!!!"
[21:58]
***sep332 has joined #archiveteam-bs [22:00]
ola_norskbut yeah, anyone else is better than me [22:00]
Harzileinhas there ever been a discussion of ia trying to infer locations from javascript (in before: halting problem)? [22:01]
ezyou mean a crawler with headless browser?
its resource intensive, but a lot of crawlers already do it. if i were to guess, its not done as it would slow down crawl speed a lot.
[22:02]
Harzileinez: well, possibly a way for the crawler to dump asts, then look if someone registered a way to get file locations from that ast. it's not even obfuscated nor really 'computation', just a format that currently blinds ia
oh and that crawler would need to time travel too :/
[22:04]
***sep332 has quit IRC (Read error: Operation timed out) [22:05]
Harzileini want to look up an image output format that my national weather service dropped when they "moved to open data" :(((
and they only had it on ftp and clients had it on pages w/ javascript animations
[22:05]
ezgenerally, the only reliable way to run js these days is headless browser
phantomjs or headless chrome
'halting problem' is not the issue as such, its more like 'insane, obtusely baroque web platform as a whole problem'
[22:06]
Somebody2ola_norsk: OK, here's what I plan to write: https://0bin.net/paste/2+yIgWRGt6IUpjbp#twBLyDm6cHRXGXp-BYmeALC83pScu4311jR94QtyvNk
Please let me know any comments you have.
[22:09]
Harzileinthis is pretty much 1990's style code (it just uses >dom0 because it needs to cache images): http://web.archive.org/web/20110119230906/http://wetter.tagesschau.de:80/radarbilder/ [22:09]
ola_norsk"mirroring some" ? [22:09]
Harzileinthose were just re-scaled versions of stuff from https://www.dwd.de/DE/leistungen/gds/gds.html, which is phased out in favour of https://opendata.dwd.de/ [22:11]
Somebody2ola_norsk: Yes, that's how I read the response...
That they liked the idea of mirroring parts of archive.org
Ideally, all of it.
[22:11]
ola_norskyeah
the email is perfect
[22:11]
Somebody2But I certainly didn't see anything in their response that suggested they were *opposed* to starting by mirroring parts of it.
OK, cool, sending now.
[22:12]
ola_norski think IA needs some kind of public fact sheet, that shows that it's not just WayBackMachine :D [22:13]
Somebody2ola_norsk: Yeah, that would probably be good. [22:14]
ola_norskor preferably a video where kahle and scott fingers the storage while pointing it out :D [22:14]
Somebody2email sent. [22:15]
ola_norsk"here's the U's of wayback, here's the U's of videos and news'
Somebody2: did you send just the person who responded?
[22:15]
Harzilein.oO( here's the U's of backups of old scene releases ;)
Harzilein runs
[22:15]
Somebody2ola_norsk: Yes. [22:15]
ola_norsk..and that
ok
[22:15]
Somebody2Christian Torp. [22:16]
ola_norskyes he responed
i'm not sure what the title is called, 1 sec
[22:16]
Somebody2It doesn't matter. [22:18]
ola_norsk"Chief Operating Officer (COO)"
tone dalen did not respond when i wrote, but he did
welcome to bureaucracy, i guess :D
[22:19]
Somebody2Somebody2 shrug [22:22]
ola_norskthe benefit of it though, is there so many instances to nag to :D
other than DCS there's also 'Arts Council' who also have a lot of say in such matters
http://www.kulturradet.no/english
Somebody2: let me know if you get a response on the email, though it's christmas now so it might take a while
[22:22]
***schbirid has joined #archiveteam-bs [22:30]
ola_norskfucking hell, i need to gooder up my formal english writing if you do :/
it's sad i'm the only norwegian presently here :/
there's not even a swede or a dane around?
seriously though, i hope to hear when/if you get a response
[22:30]
ezHarzilein: mirroring scene releases is doable. just rent a DC-hut in marshall islands (one of the few real-countries with no copyright laws)
then again, unauthorized copies kinda tend to "mirror" themselves
[22:37]
Harzileinez: huh? [22:40]
JAAI wouldn't be surprised if IA had its own stash of those. Darked, for obvious reasons. [22:40]
Harzileinthat's what i'm talking about
there's oldschool ones in unsystematic blobs. they are far more interesting ones than those with the nice emulator frontends :)
[22:41]
ezbbs era isnt that challenging yea. for starters, 20 years of data produced back then equals to week of data produced now. [22:43]
Harzileinanyway, my angle at this was its hard to get to "our" 10000 feet view that this is just "niche" data like any other, despite the provenance
+across
-across
[22:46]
kimmer1ola_norsk a Dane here.. cheers [22:47]
ezHarzilein: most of it is garbage like current warez. the important bits (demo and tracker scene) is how IA actually started, didnt it? [22:48]
ola_norskkimmer1: skål :) [22:49]
ezHarzilein: if were talking cultural heritage wrt piracy, there are certain niches in that niche where archiving would be of very high value. things like St.GIGA games.
its a bit like "pirate" recordings of tv shows which otherwise are long lost in the history.
[22:50]
icediceola_norsk: Swedish speaking Finn here [22:50]
ola_norskola_norsk: perkele :D
oops
[22:51]
icediceThat's my reaction to Finnish [22:51]
ola_norskwell, that's twice as good as an actual swede :D
anyway, if there's some south americans and some asians, the globe is covered :D
[22:51]
icediceez: Private Layer is what a lot of pirate sites use for hosting. It's a Panamanian company that has servers in Switzerland (which is a pirate's paradise)
[22:53]
ezicedice: yes, there are few of shady isps catering to the unsavory markets
its funny how the actual scene on one hand shuns commercial sites (hosted in places like you mention), and on second hand it thrives on it
icedice: the supposed ethos is to stay under the radar. being herded by a provider "look, you can host your botnet/whatever here" is p much the opposite of that.
markets can sure play out in fun way
[22:54]
JAAThis is getting too offtopic for this channel. Mind moving to #archiveteam-ot? [22:57]
icediceThey're a bit too shady for my taste nowadays though: https://www.lowendtalk.com/discussion/71510/grupo-panaglobal-15-s-a-private-layer-drama-allegedly-james-reed-mccreary-alpha-red
Isn't #archiveteam-bs meant for off-topic stuff like this?
[22:58]
***M9uy3 has joined #archiveteam-bs [22:58]
M9uy3hi, how to start that project? https://www.archiveteam.org/index.php?title=Blog.pl [22:58]
ola_norskM9uy3: 1 sec [22:59]
JAAHey M9uy3. So the first step would be to get a list of all the blogs hosted on blog.pl.
That would probably mean grabbing all of http://www.blog.pl/katalog and creating a list out of that.
[23:00]
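The first step JAA describes above — turning the /katalog listing into a list of blogs — could be sketched roughly like this. This is a hypothetical sketch: the page markup and the `*.blog.pl` subdomain pattern are assumptions inferred from the links quoted later in this log, not a confirmed scraper.

```python
import re

# Hypothetical sketch: pull unique *.blog.pl subdomains out of a
# /katalog page's HTML. The real page structure may differ; "www"
# is dropped since www.blog.pl is the portal itself, not a blog.
BLOG_URL_RE = re.compile(r'https?://([a-z0-9-]+)\.blog\.pl', re.IGNORECASE)

def extract_blogs(html: str) -> set[str]:
    """Return the set of blog subdomains linked from a katalog page."""
    return {m.group(1).lower() for m in BLOG_URL_RE.finditer(html)} - {"www"}

sample = '''
<a href="http://reniablicharz.blog.pl/?p=1660"><img src="..."></a>
<a href="http://pspwasosz10.blog.pl/">school blog</a>
<a href="http://www.blog.pl/katalog">katalog</a>
'''
print(sorted(extract_blogs(sample)))  # ['pspwasosz10', 'reniablicharz']
```

Paging through all katalog categories with this would yield the seed list; at the scale mentioned later (~7.7M blogs), deduplication would need to happen on disk rather than in one in-memory set.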
ola_norskit doesn't seem to be a specific task made for it "yet" [23:00]
M9uy3ok, only URLs? [23:01]
JAAWhat do the numbers there on the left mean? Are those numbers of blogs in the respective categories?
If so, we're talking about millions of blogs.
[23:01]
icediceez: What other countries are there that have no copyright laws? [23:01]
M9uy37752304 blogs in all categories [23:01]
JAAOh dear. [23:01]
icediceI think I've heard that Montenegro has none, at least [23:01]
M9uy3;)
it will be a great crash
[23:01]
JAASo more than every sixth Pole has a blog there??
(On average)
[23:02]
M9uy3the project is online since 2001 [23:03]
JAAHmm, ok, we'll have to think about how to do this then.
They shut down end of January, right?
[23:03]
M9uy3yes, the 31st [23:04]
JAAMhm
The links on /katalog appear to point at the newest post for each blog. That might be a good starting point.
The image links, I mean.
We're probably looking at billions of links in total though. :-|
[23:04]
M9uy3the site is called 'blog.pl' but one can find there even school websites http://pspwasosz10.blog.pl/ :/ [23:07]
ola_norskeach of those links, linking an internal thingy/image, usually has a single identifier do they not? [23:07]
JAAola_norsk: What do you mean? [23:08]
ola_norski mean, instead of billions of links, some of that billion might all be linking to same e.g picture/post etc
stored on that domain, i mean
[23:09]
JAANo, the billions I mean are probably unique, though quite many of them might be 404s.
I mean links like http://reniablicharz.blog.pl/?p=1660
Changing the p parameter leads you to other posts.
The next lower value that exists is 1654.
Which redirects to the second-newest blog post.
And so on.
The canonical post URLs look different and consist of a date and a slug, arranged as /YYYY/MM/DD/slug.
(Plus a slash at the end)
This will have to be a warrior project, but even then I'm not sure it's feasible. This thing is fucking *massive*.
[23:09]
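The `?p=` enumeration JAA outlines above could be sketched as follows. This is only an illustration of the idea, assuming (as observed in the log) that post IDs are roughly sequential per blog: missing IDs 404, and existing ones redirect to the canonical /YYYY/MM/DD/slug/ URL, which a real crawler would follow and record.

```python
# Hypothetical sketch: generate candidate post URLs for one blog by
# walking the numeric p parameter downward from the newest known post.
# A real crawler would fetch each URL, treat 404s as gaps, and follow
# redirects to the canonical dated permalink.
def candidate_post_urls(blog: str, newest_p: int, count: int):
    """Yield up to `count` candidate post URLs, newest first."""
    for p in range(newest_p, max(newest_p - count, 0), -1):
        yield f"http://{blog}.blog.pl/?p={p}"

urls = list(candidate_post_urls("reniablicharz", 1660, 3))
print(urls)
```

With millions of blogs and potentially thousands of IDs each, this is where the "billions of links" estimate comes from; distributing the ID ranges across warrior clients would be the natural split.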
ola_norskwhat if "Grupa Onet.pl SA" was willing to just give all the shit by closing time?
that could save a bit of work
[23:12]
M9uy3you mean export somehow? [23:13]
ola_norskaye
it doesn't hurt to ask
[23:13]
JAAFeel free to do so. [23:13]
ola_norsk(or demand, rudely) :D
kurwa, i do not speek polish :D
[23:13]
JAA"Give it to us, or we'll DDoS you!" :-P [23:14]
ola_norskthat [23:14]
M9uy3i've been already in contact with them today because of the second shutting down :) [23:14]
JAAWhich would actually not be too far from the truth lol. [23:14]
ezJAA: the blogs are just wordpress, nothing too spectacular there [23:14]
ola_norsk"Give it to us, or we'll DDoS your future endevours!" [23:15]
ezthe issue indeed is how to get the subdomain urls
there are blogid and blog_id entries, but not yet api call translating id to subdomain found yet
[23:15]
Somebody2ola_norsk: I will of course mention in the channel if I get a response. [23:15]
ezunfortunately the front page cant be scraped, it limits paging numbers to 100 [23:15]
JAAez: Yep, I know. This will be a good test for archiving Wordpress.com, which I assume will have to happen at some point and will be an absolute shitfest.
At least we have an easy way to find all blogs there though (through the wp.me shortener; we did that in URLTeam a while ago).
No such shortener here, unfortunately.
[23:16]
M9uy3so, there is a need of URL list and/or a possibility to reupload the content somewhere? [23:17]
ezthe mirroring itself is doable within the timeframe
the issue is how to find what to mirror
ie write a category spider for the front page is probably the best one could do for now
unless better api is reverse engineered
[23:18]
ola_norskSomebody2: good stuff. I think you will get answer, though maybe now at christmas time was the _worst_ time to write a mail to an organization :D [23:18]
JAAez: I don't see an API anywhere...? [23:19]
M9uy3there is no API [23:20]
ezJAA: there isnt
the id is in javascript for ad serving
so we know there *is* in fact numerical id per blog
[23:20]
ola_norskSomebody2: if not; If there's no response..which there were in my case, there's no fauly in eventually sending a new mail after a while [23:21]
JAARight [23:21]
M9uy3i wrote today earlier to them but the time is bad for such contacts (christmas) [23:21]
ola_norskSomebody2: fault* [23:21]
ezJAA: however everything on the frontpage seems to use subdomain urls [23:22]
M9uy3I asked them for URL list for http://republika.onet.pl/ subdomains - another project to be down (in March) - less blog, more in type of 'Geocities' [23:22]
JAANo occurrence of blog_id in any of the JS included on blogs either. [23:23]
ezits directly in the html
just viewsource
hmm, http://www.blog.pl/data/cache/thumb_270x200/data/post-images/74105/79552.jpeg
[23:23]
JAAYeah, I mean it isn't used anywhere. [23:24]
ezvar dige_vars = {"homepage_url":"www.blog.pl","category":"Spo\u0142ecze\u0144stwo","admin_url":"http:\/\/zarzadzanie.blog.pl\/krolowa-superstar.blog.pl\/wp-admin\/","template":"mystique","addthis":{"selector":".post-content","action":"append"},"blog_id":74105,"p
so the thumbs use blogid/postid format
[23:24]
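Pulling the numeric `blog_id` ez found out of the `dige_vars` object embedded in each blog page could be sketched like this. A hedged sketch only: it assumes the JSON-ish shape quoted above (which is truncated in the log) holds across blogs, and as ez notes, there is still no known way to translate an ID back to a subdomain.

```python
import re

# Hypothetical sketch: extract the numeric blog_id from the dige_vars
# JS object found in a blog page's HTML. The ID shows up elsewhere in
# paths like /data/post-images/<blog_id>/<post_image_id>.jpeg.
BLOG_ID_RE = re.compile(r'"blog_id"\s*:\s*(\d+)')

def extract_blog_id(html: str):
    """Return the blog_id embedded in the page, or None if absent."""
    m = BLOG_ID_RE.search(html)
    return int(m.group(1)) if m else None

# Fragment based on the snippet quoted in the log (truncated there too):
snippet = 'var dige_vars = {"homepage_url":"www.blog.pl","blog_id":74105,'
print(extract_blog_id(snippet))  # 74105
```

Recording subdomain → blog_id pairs while crawling would at least build the mapping in one direction, even without an API doing the reverse lookup.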
Somebody2ola_norsk: Well, there isn't really any *need* for them to respond to me -- that was the main point of my email. :-) [23:25]
ezwhich is all nice, if there were something, anything, which could translate id to blog url [23:25]
JAAWe should make a channel for this.
We'll need one down the line anyway.
[23:25]
Somebody2They can simply go forward with getting funding for storage, then download a bunch of IA's stuff, and happily sit on it. :-) [23:25]
ez#blog.pls ? [23:25]
Somebody2I'd hope they'd drop me (and info@archive.org) a quick note to say, "Hi, we've made a copy of 10PB of stuff, thanks for making it available!" -- but it's not requried... [23:26]
ola_norskSomebody2: i'm not sure what you mean by that, but NCS is the major computer/it association in Norway. Basically every computer/tech related company is member...I guess it's kind of like the NRA of computer stuff here [23:27]
Somebody2ola_norsk: What I mean is that the point of my email was that NCS does not need to talk to me any more in order to mirror IA.
So if they don't respond, it doesn't mean they aren't, you know, mirroring IA.
[23:28]
ola_norskaye, they shouldn't..they can email archive.org themselves damnit
"what more can you tell me about the idea"..the fuckers should know how to google
[23:28]
Somebody2ola_norsk: They don't need to email archive.org EITHER.
That was what I keep trying to point out to you!
[23:30]
ola_norskthey sure as hell don't need to ask me about 'something more concrete' though :D [23:31]
***icedice has quit IRC (Quit: Leaving) [23:31]
Somebody2I think the "something more concrete" was hopefully along the lines of what I suggested. :-) [23:31]
ola_norskor, maybe i should have just said straight up: It might need a couple of square meter of datalockers and racks [23:32]
Somebody2Somebody2 going AFK
Yes, that probably would have been good. :-)
[23:32]
ola_norskaye, but my english, or rather, my technical norwegian is not that proficient [23:33]
JAAez: Sounds good to me.
>>>>> Discussion on archiving blog.pl is now going on in #blog.pls
[23:34]
ola_norskSomebody2: the best i can do is try to get people and organizations with 'sway' to consider it :/
Somebody2: for all i know, e.g UIO.no have already pitched the idea..
[23:40]
geographical location, political standing globally, and its focus on 'green energy' and the somewhat hysterical habit of wanting to preserve old useless shit..would be a plus [23:46]
***icedice has joined #archiveteam-bs [23:49]
ola_norskin norway, trying to build new close to e.g even an old wooden gate is sometimes a cause for years of controversy :D [23:51]
***M9uy3 has quit IRC (Ping timeout: 260 seconds) [23:53]
ola_norskeven 80s and 90s grafittis are at times deemed protected as 'cultural heritage'
the sad effect is, all the books in local libraries are old as fuck :/
[23:55]
***ola_norsk has quit IRC (I never hurts to ask. Merry christmas! https://youtu.be/wmin5WkOuPw) [23:59]
