#archiveteam-bs 2017-09-18,Mon


Who | What | When
***etudier has quit IRC (Quit: My MacBook has gone to sleep. ZZZzzz…) [00:02]
........ (idle for 39mn)
BlueMaxim has joined #archiveteam-bs [00:41]
......... (idle for 41mn)
Soni has joined #archiveteam-bs [01:22]
................ (idle for 1h18mn)
dashcloud has quit IRC (Read error: Operation timed out)
dashcloud has joined #archiveteam-bs
[02:40]
astridif IA doesn't have a copy of libgen then i'll eat my hat
if IA has failed to snag a copy of libgen then we should really reconsider our life choices
[02:45]
jrwrlibgen? [02:51]
astridthere was some discussion a few hours ago [03:02]
hook54321jrwr: libgen = Library Genesis [03:04]
SketchCowWoooo [03:04]
hook54321Pretty sure this is the official site: http://gen.lib.rus.ec/
However this is the site listed on wikipedia: https://libgen.pw/
oh wait, that's one of them. DuckDuckGo was only showing one.
On a side note, whenever we email site owners asking them to cooperate with us, I recommend that we send it to another email address first to see if it ends up in spam. That happened with the owner of imgh.us.
[03:06]
secondDid you guys archive the-eye.eu?
It has a lot of data though...
[03:12]
hook54321We did not [03:21]
secondAre there plans to do so? [03:23]
hook54321About how much space does it take up? [03:24]
secondWell just the MSDN dump is 2.7TB [03:25]
hook54321eh [03:25]
secondAnother dump of comics from whenever (pretty much all the major studios) is about 3TB
Rom collection, not sure, pretty big I assume
Then there is the reddit rips they have
[03:25]
hook54321There isn't much stopping someone from uploading it to archive.org, maybe a mirror.
[03:25]
secondWhat happens if I upload stuff, would the archive just delete it?
I think it should be archived but you'll need to wait til the copyright expires :/
[03:26]
mundussecond, I have a copy of it
It's only like 8TB
But most is not legal content
[03:27]
secondyes [03:27]
mundusand if it was going to be mirrored, archivist would do it [03:27]
secondOnly
archivist is the one who owns it
[03:27]
hook54321Disclaimer: Most of us are not employed by archive.org.
From what I've heard however, they wait until a copyright holder sends them a notice.
[03:28]
mundusYeah, if he wanted it on IA it would be on IA [03:28]
astridwe don't talk about copyright in here, folks
take it to #scared-shitless
[03:28]
hook54321we don't? [03:28]
Frogginghaha [03:28]
astridor maybe /r/legaladvice [03:28]
second#scared-shitless: Nick/channel is temporarily unavailable [03:28]
astridokay what's that tell you [03:29]
hook54321That there was a netsplit recently
I searched for the word "copyright" in the logs: 475 matches in 213 files
Lots of the stuff in the-eye appears to be porn. Still doesn't stop someone from attempting to upload it though.
[03:29]
secondDidn't know that
How is it "only" 8TB?
[03:39]
***arkhive has joined #archiveteam-bs [03:44]
......... (idle for 41mn)
pizzaiolo has quit IRC (Quit: pizzaiolo) [04:25]
.... (idle for 17mn)
kim__ has quit IRC (Ping timeout: 246 seconds)
Sk1d has quit IRC (Ping timeout: 250 seconds)
[04:42]
Sk1d has joined #archiveteam-bs [04:53]
Fletcher|Worth noting that IA standard procedure seems to be to dark an item instead of deleting it when a copyright claim is received [04:54]
jrwrCorrect [05:03]
Somebody2darking an item *may* mean that it's entirely gone, however. Or it may not. What it definitively means is that IA has ceased to *distribute* the item. [05:09]
...... (idle for 27mn)
***Asparagir has quit IRC (Asparagir) [05:36]
........ (idle for 39mn)
Mateon1 has quit IRC (Remote host closed the connection)
Mateon1 has joined #archiveteam-bs
[06:15]
schbirid has joined #archiveteam-bs [06:28]
.... (idle for 18mn)
robink has quit IRC (Ping timeout: 246 seconds)
robink has joined #archiveteam-bs
[06:46]
schbiridi threw medium.com into wpull and it OOMd :D [06:49]
......... (idle for 43mn)
***Honno has joined #archiveteam-bs [07:32]
....... (idle for 34mn)
Jonison has joined #archiveteam-bs [08:06]
.... (idle for 18mn)
BartoCH has joined #archiveteam-bs [08:24]
.... (idle for 18mn)
Mateon1 has quit IRC (Ping timeout: 260 seconds)
Mateon1 has joined #archiveteam-bs
[08:42]
..... (idle for 20mn)
Jonison has quit IRC (Read error: Connection reset by peer) [09:02]
...... (idle for 29mn)
icedice has joined #archiveteam-bs
icedice has quit IRC (Remote host closed the connection)
etudier has joined #archiveteam-bs
[09:31]
Jonison has joined #archiveteam-bs [09:39]
........ (idle for 39mn)
etudier has quit IRC (Quit: My MacBook has gone to sleep. ZZZzzz…) [10:18]
.... (idle for 16mn)
etudier has joined #archiveteam-bs [10:34]
.... (idle for 19mn)
etudier has quit IRC (Quit: My MacBook has gone to sleep. ZZZzzz…) [10:53]
...... (idle for 28mn)
pizzaiolo has joined #archiveteam-bs [11:21]
dashcloud has quit IRC (Read error: Operation timed out) [11:32]
dashcloud has joined #archiveteam-bs [11:39]
mls has quit IRC (Ping timeout: 250 seconds) [11:51]
mls has joined #archiveteam-bs [12:03]
........ (idle for 35mn)
BlueMaxim has quit IRC (Quit: Leaving) [12:38]
etudier has joined #archiveteam-bs [12:46]
plue has quit IRC (Quit: WeeChat 1.5) [12:52]
Jonison has quit IRC (Ping timeout: 260 seconds)
etudier has quit IRC (Quit: My MacBook has gone to sleep. ZZZzzz…)
etudier has joined #archiveteam-bs
[13:01]
mls has quit IRC (Ping timeout: 250 seconds) [13:09]
..... (idle for 20mn)
etudier has quit IRC (Quit: My MacBook has gone to sleep. ZZZzzz…)
dashcloud has quit IRC (Read error: Operation timed out)
[13:29]
SketchCowIt is not gone if dark'd. [13:40]
***mls has joined #archiveteam-bs
etudier has joined #archiveteam-bs
[13:44]
dashcloud has joined #archiveteam-bs [13:50]
etudier has quit IRC (Quit: My MacBook has gone to sleep. ZZZzzz…) [13:59]
etudier has joined #archiveteam-bs [14:10]
dd0a13f37 has joined #archiveteam-bs [14:17]
dd0a13f37hook54321: The official site is libgen.io (or the IP, 94.something), gen.lib.rus.ec is an official mirror which only has the metadata
second: In technological trouble, yes, but the operators will be fine. They have good opsec, have been doing this for 20 years, and the only one who isn't anonymous is a literal fugitive. They all live in the former Soviet Union too, so copyright is not a big problem there.
libgen.pw, b-ok, and bookza are unofficial mirrors. sci-hub is a sister project run by the aforementioned fugitive using libgen as a storage backend, and likely has backups too
The torrents are also decently seeded from various residential Russian IPs, and there are probably more who aren't seeding the torrents since it's storage-bound
astrid: from what I can see, you're lacking a copy of sci-hub (aka sci-mag), which is really much more important than libgen
[14:19]
JAATIL someone thought it'd be a good idea to name a parasitic wasp after Elbakyan. [14:25]
dd0a13f37bit rude [14:25]
JAAYeah, that's what she said as well.
But regarding the urgency of backing up Sci-Hub: I thought it's just a frontend to libgen? What additional data is there on SciHub?
[14:26]
dd0a13f37well, I doubt they'll all be arrested at the same time since they're different projects
It's not that simple
sci-hub uses libgen as a backend
they have tons of "donated" accounts, and they cycle through them
and download articles
Scihub's articles are not in the main libgen collection
libgen is separated into sci-tech (libgen), comics, paintings, Russian fiction, foreign fiction, scimag
[14:27]
JAAOh [14:28]
dd0a13f37only sci-tech (libgen) is backed up afaik
maybe foreignfiction/rus fict too
[14:28]
JAAHm, I see. [14:28]
dd0a13f37look at the library genesis forum if you're curious about how it works
might be a good idea to use tor depending on where you live
and the libgen collection on IA is not complete from what I can see, https://archive.org/details/gen-lib&tab=about was last updated in 2016
[14:29]
***drumstick has quit IRC (Read error: Operation timed out) [14:33]
JAAMar 2017 according to the graph, but keep in mind that this might not be the correct collection. [14:35]
dd0a13f37That's the only one with any amount of activity
Unless they store it under some other name
or don't make it public at all
[14:36]
***Soni has quit IRC (Ping timeout: 250 seconds) [14:37]
DFJustinthat's not how you see if things have been uploaded to a collection
there are items in that collection from 2 days ago
[14:45]
dd0a13f37How do you? [14:45]
DFJustinI don't know if there is a public way [14:46]
dd0a13f37Can you see what the name is? Is it something like 2092000? [14:46]
DFJustinr_1727000 [14:46]
dd0a13f37sci-hub can probably afford backups, they currently have 67 btc (USD $270k) in their bitcoin wallet, and their expenses are around "a few thousand" a month
that's an official torrent from 17-Aug-2017
http://libgen.io/libgen/repository_torrent/
[14:47]
SketchCowHey, remember the good times when I'd be able to answer Internet Archive questions helpfully
Before edsu implied that the Internet Archive banned him?
Those were good times.
How's that #internetarchive channel doing, anyway, now that I can't go in there
[14:47]
dd0a13f37Would it be possible for you to add the later ones? It's as easy as downloading http://libgen.io/libgen/repository_torrent/r0-2092.ZIP and the last few ones, then deriving if I understand correctly [14:48]
SketchCowOh, and why does edsu have op on #archiveteam again
When he wrote a whole essay with half-baked info about what the Internet Archive was going to do with robots.txt and got a wave of hatred?
Not that I'm going to jeopardize my job and ban him, or anything
[14:48]
dd0a13f37r_2093000-r_2105000 from the site I linked
Quick summary?
[14:49]
SketchCowGood times, good times
Heyyyyyy the Ted Nelson scans are going beautifully, and the CD-ROM scanning has a faster workflow
I paid $40 for a program that does nothing but crop
But it crops well!
[14:49]
Froggingdo one thing and do it well
:)
[14:51]
SketchCowThis does the one thing very well.
It's called "Batchcrop"
I can say "OK, for the big pile of TIFFs I just scanned... crop away all the white part of the scan, with a X amount of pixels in all directions around the "content", and save it."
[14:52]
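The crop-with-margin behaviour SketchCow describes could be sketched roughly like this; it's a guess at the logic, not Batchcrop's actual code, and it uses a plain 2D list of grey values instead of a real TIFF.

```python
# Sketch of "crop away the white, keep a margin of X pixels" on a 2D
# list of grey values (255 = white). A real batch tool would apply this
# per scanned TIFF (e.g. via Pillow); this is illustrative only.

def content_bbox(pixels, white=255, margin=1):
    """Bounding box (top, left, bottom, right) of non-white content, padded by margin."""
    rows = [y for y, row in enumerate(pixels) if any(p != white for p in row)]
    cols = [x for x in range(len(pixels[0]))
            if any(row[x] != white for row in pixels)]
    top = max(rows[0] - margin, 0)
    bottom = min(rows[-1] + margin, len(pixels) - 1)
    left = max(cols[0] - margin, 0)
    right = min(cols[-1] + margin, len(pixels[0]) - 1)
    return top, left, bottom, right

# Toy "scan": content occupies rows 1-2, columns 2-3.
scan = [[255] * 6,
        [255, 255, 0, 0, 255, 255],
        [255, 255, 0, 0, 255, 255],
        [255] * 6]
```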
***pizzaiolo has quit IRC (Quit: pizzaiolo) [14:54]
SketchCowSo basically, I can just keep shoving CDs into my scanner, one at a time, and just scan them each into a directory. [14:54]
***pizzaiolo has joined #archiveteam-bs [14:55]
SketchCowThe longest part now is typing in names for the scans so they either match CD-ROMs I put up on archive with no scan, or match to rips I just did of same. [14:55]
dd0a13f37Can't most image processing programs do that? Or does it have a sophisticated white space detection algo? [14:55]
SketchCowI lent a guy some CDs to do this... 2 years ago
He sheepishly brought the bin back to me last week.
I scanned and cropped all 86 in about 1.5 hours
I'm sure all image processing programs do something.
There are many like it but this one is mine
[14:55]
dd0a13f37For the really hostile sites, what about using commercial proxy providers? I read about LJ on the wiki and they were apparently blacklisting your IPs
>For this project, set it to 1, because LiveJournal tends to ban scrapers!
[14:59]
>Since 2015, Sci-Hub has operated its own repository, distinct from LibGen
If this is true (which I'm not sure of), then that might be why LG sci-mag torrents are unavailable
[15:10]
DFJustin<dd0a13f37> Would it be possible for you to add the later ones?
obviously somebody is actively working on it so there's no point in recruiting somebody else
[15:16]
dd0a13f37All you have to do is upload torrent and derive [15:18]
DFJustinput that energy into archiving something that isn't famous [15:18]
dd0a13f37and it's not actively maintained as far as I understand
I am, I'm waiting on some email responses currently
[15:18]
..... (idle for 24mn)
***Mateon1 has quit IRC (Quit: Mateon1)
Mateon1 has joined #archiveteam-bs
[15:42]
........ (idle for 38mn)
schbiridhttp://www.bbc.com/news/uk-england-wiltshire-41267378 [16:21]
..... (idle for 23mn)
***odemg is now known as xbinwank
Honno has quit IRC (Read error: Operation timed out)
xbinwank is now known as odemg
[16:44]
dd0a13f37Anyone here speak/understand korean? [16:48]
......... (idle for 43mn)
***Asparagir has joined #archiveteam-bs
svchfoo3 sets mode: +o Asparagir
svchfoo1 sets mode: +o Asparagir
[17:31]
kristian_ has joined #archiveteam-bs [17:39]
dashcloud has quit IRC (Read error: Operation timed out)
dashcloud has joined #archiveteam-bs
[17:50]
dd0a13f37www.korean-books.com.kp/en/packages/xnps/download.pg.php?419 change "en" to "ko en fr sp de ru ch ja ar" to taste and 430 to any number <= 430
What's the proper way to archive something like this? Do you need WARC for what's just a GET request returning a file?
[17:54]
astridthe proper way is to gin up a list of urls and submit to archivebot with !ao < http://url/yourlist.txt [17:55]
dd0a13f37Thanks! [17:56]
astridthen you can download the warc when the job is done and extract everything from it, if you're so inclined :) [17:57]
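The url-list approach astrid describes could be scripted as below; the language codes and the <= 430 id range are from dd0a13f37's earlier message, and the output filename is arbitrary.

```python
# Sketch: build the korean-books.com.kp URL list so it can be fed to
# ArchiveBot as `!ao < http://somewhere/urls.txt`. The language codes
# and id range come from the chat; "urls.txt" is an arbitrary name.

languages = ["en", "ko", "fr", "sp", "de", "ru", "ch", "ja", "ar"]
base = "http://www.korean-books.com.kp/{lang}/packages/xnps/download.pg.php?{n}"

with open("urls.txt", "w") as f:
    for lang in languages:
        for n in range(1, 431):          # any number <= 430, per the chat
            f.write(base.format(lang=lang, n=n) + "\n")
```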
dd0a13f37Okay, so what's the proper way when there's also metadata in XML and thumbnails? Parse separately or make script to rename them to their "real" names? [17:58]
astridhm? [17:59]
dd0a13f37They're named like 00000412.pdf
But they have names
one second, site takes a bit to load
They have names like "UNDERSTANDING KOREA (9) (HUMAN RIGHTS)"
Also metadata
"- Book on Common Sense -"
"Foreign Languages Publishing House"
"87 pp"
and an image
This won't be saved if you just have them archive a link list
[18:00]
DFJustinif you're motivated / have skills, the best way would probably be to upload each pdf as a separate IA book item with metadata [18:02]
astridwell i'd add the xml files to the link list then [18:02]
***fie has quit IRC (Read error: Operation timed out) [18:02]
dd0a13f37It's not XML, you issue a POST request and get an entire page as HTML
So you'd have to parse it
I don't have the skills, and there are a few thousand of them
[18:03]
astridohh [18:12]
dd0a13f37https://pastebin.com/parEbjPK this is what it looks like
after parsing
you send a base64 encoded json dict
and get back a json dict
containing the page html
and it's escaped with backslashes two or three times
[18:12]
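The exchange dd0a13f37 describes could be prototyped roughly as below; the endpoint is never actually called, and the field names (`page`, `lang`, `html`) are invented for illustration, but it shows the "peel off the backslash-escaped layers" step.

```python
import base64
import json

# Hypothetical sketch of the CMS exchange: send a base64-encoded JSON
# dict, get back a JSON dict whose page HTML is backslash-escaped
# "two or three times". Field names here are made up.
request_body = base64.b64encode(json.dumps({"page": 1, "lang": "en"}).encode())

# Simulate a doubly-escaped response body instead of hitting the site:
inner = '<div class="title">UNDERSTANDING KOREA (9) (HUMAN RIGHTS)</div>'
raw = json.dumps({"html": json.dumps(inner)})

# Each escape layer is one more json.loads on the string value.
html = json.loads(raw)["html"]
while html.startswith('"'):   # still looks like a JSON string literal
    html = json.loads(html)
```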
astridthat sounds like a delight [18:14]
dd0a13f37check out their homemade CMS
It's stateful, you set which language you want, it saves it server-side
[18:14]
***ReimuHaku has quit IRC (Ping timeout: 250 seconds) [18:15]
dd0a13f37!ao < https://my.mixtape.moe/nsrkrj.txt
like this?
[18:18]
astridyou need http:// at the front of your urls
or https://
or ftp://
depending
[18:19]
dd0a13f37thanks
!ao < https://my.mixtape.moe/tktryb.txt
[18:20]
astridthese uh
aren't exactly pdfs
[18:21]
dd0a13f37They are
Or does it not handle Content-Disposition?
[18:22]
astridthey're pdfs with a sql statement at the front ??? [18:22]
dd0a13f37I can open them just fine [18:22]
astridhm
maybe pdf doesn't mind about that
[18:22]
***ReimuHaku has joined #archiveteam-bs [18:23]
astridthey seem to all start with [18:24]
dd0a13f37oh yeah I see [18:24]
astridUpdate PublicationList_ko Set pVisitCount="2" Where pId=127%PDF-1.4
it's uh
[18:24]
dd0a13f37sqli [18:24]
astridnice job folks [18:24]
dd0a13f37There are numerous other vulnerabilities too [18:24]
astridfigures [18:25]
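The stray prefix astrid pasted (the SQL statement glued onto `%PDF-1.4`) could be trimmed after download; a minimal sketch, with the sample bytes made up to mirror the log:

```python
# The downloaded files reportedly begin with a stray SQL statement
# before the real %PDF header. Trim everything up to the first %PDF
# marker; the sample below is illustrative, not real file data.

def strip_to_pdf(data: bytes) -> bytes:
    """Return data from the first %PDF marker onward; unchanged if no marker."""
    idx = data.find(b"%PDF")
    return data[idx:] if idx != -1 else data

sample = b'Update PublicationList_ko Set pVisitCount="2" Where pId=127%PDF-1.4\n%%EOF'
cleaned = strip_to_pdf(sample)
```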
dd0a13f37There's an undocumented way to register an account on KCNA [18:25]
astridokay, well, go ahead and submit that job in #archivebot [18:25]
dd0a13f37which appears to do nothing
but it actually registers you
[18:25]
astridlol [18:25]
dd0a13f37and you can log in
and the only thing it does
is add some tracking code
you don't even show up as logged in
there is also a random zip file serving malware
[18:25]
.... (idle for 19mn)
Well, I can't get it to work. Any pointers? It needs a timeout of maybe 5 minutes for the first request, then some IP whitelisting or something happens
So just forcing IA to do a request would be fine
[18:44]
astridIA doesn't run archivebot :) [18:44]
dd0a13f37Does it use IA IPs? [18:45]
astridno
we run archivebot
[18:45]
dd0a13f37Does it share an IP with anything else? [18:45]
astridit's a bunch of machines, run by several people in this channel
generally they have dedicated IPs, but multiple grabbers run per host
[18:45]
dd0a13f37Do you run one of them? Can you force it to use a certain machine? [18:46]
astridyes and yes [18:46]
dd0a13f37Do you have SSH access/similar? [18:46]
astridi wasn't getting that whitelisting effect, btw
it may be that you've got browser keepalives going on
[18:46]
dd0a13f37Nope, they do connection:close
I might be mistaking it for something else, but wget takes a long time (minutes) if it even does it
and ff is instant
[18:47]
astridarchivebot is more similar to wget than to firefox [18:49]
dd0a13f37yeah
oh, apparently I have a phpsessid
well that explains it
"Apache/2.2.15 (RedStar 3.0)", how does it even work
Does it just randomly time out requests?
I managed to get one with wget now, connecting took 20 seconds and downloading 2m:20s (at 15 kbit)
[18:49]
***zyphlar has joined #archiveteam-bs [19:01]
dd0a13f37Well, I can't wrap my head around north korean web magic [19:09]
***dd0a13f37 has left [19:10]
........ (idle for 38mn)
superkuhhttps://www.eff.org/deeplinks/2017/09/open-letter-w3c-director-ceo-team-and-membership "Effective today, EFF is resigning from the W3C." [19:48]
astrido_O [19:49]
JAAWow
Ah, the DRM bullshit, right.
[19:50]
..... (idle for 20mn)
***schbirid has quit IRC (Quit: Leaving) [20:11]
hook54321holy crap
I imagine this event will be a bit different now. https://twitter.com/internetarchive/status/909868291249684480
[20:20]
***kim_ has joined #archiveteam-bs [20:27]
..... (idle for 21mn)
Dark_Star has quit IRC (Remote host closed the connection) [20:48]
..... (idle for 23mn)
zyphlar has quit IRC (Quit: Connection closed for inactivity) [21:11]
.... (idle for 18mn)
Darkstar has joined #archiveteam-bs [21:29]
noirscape has quit IRC (Read error: Operation timed out)
zino has quit IRC (Quit: Leaving)
[21:43]
hook54321https://www.youtube.com/watch?v=h94ZKGVg-B8
I think we should post something about this on the ArchiveTeam twitter account.
[21:46]
godanewho wants to start building rpi librarybox boxies? [21:56]
***BlueMaxim has joined #archiveteam-bs
balrog has quit IRC (Read error: Operation timed out)
JAA has quit IRC (Read error: Operation timed out)
C4K3 has quit IRC (Read error: Operation timed out)
ruunyan has quit IRC (Read error: Operation timed out)
squires has quit IRC (Read error: Operation timed out)
ZexaronS has quit IRC (Read error: Operation timed out)
rocode has quit IRC (Read error: Operation timed out)
ZexaronS has joined #archiveteam-bs
JAA has joined #archiveteam-bs
swebb sets mode: +o JAA
wp494 has quit IRC (Read error: Operation timed out)
squires has joined #archiveteam-bs
balrog has joined #archiveteam-bs
swebb sets mode: +o balrog
REiN^ has quit IRC (Write error: Broken pipe)
wp494 has joined #archiveteam-bs
PotcFdk has quit IRC (Write error: Broken pipe)
ruunyan has joined #archiveteam-bs
REiN^ has joined #archiveteam-bs
tfgbd_znc has quit IRC (Ping timeout: 600 seconds)
tfgbd_znc has joined #archiveteam-bs
rocode has joined #archiveteam-bs
drumstick has joined #archiveteam-bs
C4K3 has joined #archiveteam-bs
PotcFdk has joined #archiveteam-bs
wabu has quit IRC (Ping timeout: 246 seconds)
[21:59]
ola_norsk has joined #archiveteam-bs [22:20]
hook54321godane: What are those? [22:21]
ola_norskis posting links possible? [22:21]
astridyes definitely [22:21]
ola_norskok, one sec
https://pbs.twimg.com/media/DKCW8SnWkAIgpqn.jpg:large
that is the result of the attempt
but, let me get the url to the tweet status, so you dont need to retype it from image
https://twitter.com/JeffHollandaise/status/897970096429084672
[22:21]
astridhm, for some reason twitter has decided that you're coming from germany [22:24]
ola_norskI can view this url..but can't archive it. When i try, i get german twitter [22:24]
astridi'm not sure how it decides that
probably the source IP that archive.org is using looks like a german IP
[22:24]
ola_norskyes, it's not me [22:25]
astridyou are more than welcome to join #archivebot and do
!ao https://twitter.com/JeffHollandaise/status/897970096429084672 --ignore-sets=twitter
er, also the --phantomjs option
[22:25]
ola_norskill check it out. ty [22:25]
***wabu has joined #archiveteam-bs [22:26]
ola_norskbut, i have to ask..what difference would it really make? [22:26]
astridarchivebot is run by us, and i haven't seen any german redirects affecting it
(archiveteam is not the same as archive.org, we have completely different infrastructure)
[22:26]
ola_norski mean looking like a german IP, would there be any difference in it working or not? [22:27]
astridoh
uhhh, it shouldn't redirect
what do you want to happen exactly?
the --phantomjs option will pull in the css and images and javascript so it'll look and work correctly
doing it with archivebot will make sure it gets run from a jurisdiction where twitter won't screen out nazi imagery
[22:27]
ola_norski would expect waybackmachine to archive like regular
ok
[22:28]
astridwayback machine's liveweb feature usually works well but sometimes has some issues
twitter is a difficult website to archive
[22:28]
ola_norski've had no problem so far i think [22:29]
astridhm okay
maybe it's because the tweet has nazi imagery in it, i know they filter that sort of thing out in some places
[22:29]
***atluxity has quit IRC (Ping timeout: 506 seconds) [22:30]
ola_norskso in german twitter, nazi imagiry (i havent looked close to see if there was any), is screened? [22:30]
astridsometimes?
it's not clear
i mean there is some nazi/kkk shit in that tweet
[22:30]
ola_norskok
anyway, thanks for the help. I can't stand nazism myself, but this was really frustrating
[22:31]
astridi'm not a fan either ...
yeah
but yeah. #archivebot is a channel on this network where we operate an irc bot that lets you submit links for archival
[22:33]
ola_norskbtw, i also tried to previously to archive my own https://pbs.twimg.com/media/DKCYoW-W4AAsH_T.jpg:large
and i can't see how i'm pegged as a nazi
[22:38]
Lagittajawell, looks like my home "server" build completes faster than I expected. scored a nice (imho) motherboard from the same seller I got the i3-2120 from. intel's dq67ow [22:40]
astridola_norsk: maybe hm maybe actually, that looks like archive.org's ip space has been blocked from using twitter without logging in [22:40]
***bluesoul has quit IRC (Read error: Operation timed out) [22:41]
astrid:( [22:41]
Lagittajahaven't had much experience with Intel's boards in the past other than the DH77EB in my mother's HTPC which actually has been rock solid for the past 4+ years. and this thing was 32, including shipping. not too shabby [22:41]
***svchfoo1 has quit IRC (Remote host closed the connection)
bluesoul has joined #archiveteam-bs
[22:41]
astridLagittaja: i think that's completely offtopic for this channel [22:41]
***svchfoo1 has joined #archiveteam-bs [22:42]
Lagittajawell sorry astrid, I have been having a conversation about this build with another person on this channel and I intend to use it to put more horse power for archiving. so sorry I'll see myself out [22:42]
***Lagittaja has quit IRC (Quit: Leaving)
svchfoo3 sets mode: +o svchfoo1
[22:43]
astridah, sorry, i didn't know [22:48]
***kristian_ has quit IRC (Quit: Leaving)
ola_norsk has left
[22:54]
..... (idle for 23mn)
BartoCH has quit IRC (Quit: WeeChat 1.9) [23:17]
godanehook54321: i'm working on a project to add kiwix to slackwarearm 14.2
https://archive.org/details/slackwarearm-14.2-20170906-kiwix
[23:18]
***drumstick has quit IRC (Read error: Operation timed out) [23:22]
joepie91_https://twitter.com/xor/status/909888462584795136 [23:23]
godanei now just need to write a script to mount /dev/sda2 and look for something like /mnt/data/kiwix for all the kiwix files
i have another script to build the library.xml file in /mnt/data/kiwix folder
then it's kiwix --library $path/library.xml --port 8000 --daemon something
[23:24]
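The startup flow godane outlines could be sketched as below; the paths and the `--library`/`--port`/`--daemon` flags are from the chat, while `kiwix-manage` as the library-building tool is an assumption, and `dry_run=True` only returns the commands instead of executing anything.

```python
# Hedged sketch of: mount /dev/sda2, register the kiwix files found in
# /mnt/data/kiwix into library.xml, then launch the kiwix daemon.
import subprocess
from pathlib import Path

KIWIX_DIR = Path("/mnt/data/kiwix")
LIBRARY = KIWIX_DIR / "library.xml"

def start_kiwix(port=8000, dry_run=True):
    zims = sorted(str(p) for p in KIWIX_DIR.glob("*.zim")) if KIWIX_DIR.is_dir() else []
    cmds = [["mount", "/dev/sda2", "/mnt/data"]]
    cmds += [["kiwix-manage", str(LIBRARY), "add", z] for z in zims]
    cmds.append(["kiwix", "--library", str(LIBRARY),
                 "--port", str(port), "--daemon"])
    if not dry_run:
        for cmd in cmds:
            subprocess.run(cmd, check=True)  # mounting would need root
    return cmds
```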
***Soni has joined #archiveteam-bs [23:33]
.... (idle for 15mn)
fie has joined #archiveteam-bs [23:48]
