[00:07] SketchCow: i'm up to 2016-01-31
[00:07] with kpfa
[00:09] *** Stiletto has joined #archiveteam-bs
[00:37] Great
[00:39] xmc: http://fos.textfiles.com/ARCHIVETEAM/ starts to really show the amount of data going in.
[00:39] I think 300-500gb daily is pretty possible as an average.
[01:31] *** decay has quit IRC (Ping timeout: 250 seconds)
[01:33] *** decay has joined #archiveteam-bs
[01:34] *** BlueMaxim has quit IRC (Read error: Operation timed out)
[01:35] *** JesseW has joined #archiveteam-bs
[01:41] *** slpeeds has joined #archiveteam-bs
[01:48] *** fdo54ss has quit IRC (Ping timeout: 633 seconds)
[02:00] *** Honno has joined #archiveteam-bs
[02:12] *** bwn_ has joined #archiveteam-bs
[02:13] *** Coderjoe_ has quit IRC (Read error: Connection reset by peer)
[02:22] *** JesseW has quit IRC (Ping timeout: 370 seconds)
[02:26] *** bwn has quit IRC (Read error: Operation timed out)
[02:33] *** Coderjoe has joined #archiveteam-bs
[02:36] *** Honno has quit IRC (Read error: Operation timed out)
[03:15] *** JesseW has joined #archiveteam-bs
[03:19] *** brayden has joined #archiveteam-bs
[03:19] *** swebb sets mode: +o brayden
[03:24] *** bwn has joined #archiveteam-bs
[03:24] *** bwn_ has quit IRC (Quit: Quit)
[03:29] *** jspiros has quit IRC (Read error: Operation timed out)
[03:29] *** wyatt8740 has quit IRC (Read error: Operation timed out)
[03:30] *** SadDM has quit IRC (Read error: Operation timed out)
[03:30] *** SN4T14 has quit IRC (Read error: Operation timed out)
[03:30] *** mr-b has quit IRC (Read error: Operation timed out)
[03:30] *** chfoo- has quit IRC (Read error: Operation timed out)
[03:30] *** remsen has quit IRC (Ping timeout: 246 seconds)
[03:30] *** matthusby has quit IRC (Ping timeout: 246 seconds)
[03:31] *** ErkDog has quit IRC (Ping timeout: 246 seconds)
[03:32] *** wyatt8740 has joined #archiveteam-bs
[03:32] *** yakfish has quit IRC (Ping timeout: 246 seconds)
[03:33] *** bwn_ has joined #archiveteam-bs
[03:35] *** bwn has quit IRC (Ping timeout: 492 seconds)
[03:37] *** remsen has joined #archiveteam-bs
[03:38] *** chfoo- has joined #archiveteam-bs
[03:38] *** SN4T14 has joined #archiveteam-bs
[03:38] *** ErkDog has joined #archiveteam-bs
[03:53] *** mr-b has joined #archiveteam-bs
[03:55] *** bwn_ is now known as bwn
[03:56] jessew: this ia_census had me fooled for a bit, it had looked like it stopped short at ~3%, but i'm seeing 591697/592268 identifiers processed in that list
[03:59] yeah, that was an error in the pv arguments
[03:59] -l tells it to count by line, but it still picked up the *size* from the file size, not the number of lines in the file.
[04:00] Mine got stuck near the end; if it's not making progress for a while, you might try killing it and doing another run with just the ones missing from the first run
[04:00] bwn:
[04:06] *** pwnsrv has joined #archiveteam-bs
[04:43] *** bwn_ has joined #archiveteam-bs
[04:57] *** bwn has quit IRC (Read error: Operation timed out)
[05:00] *** Sk1d has quit IRC (Ping timeout: 194 seconds)
[05:05] *** Sk1d has joined #archiveteam-bs
[05:11] *** Honno has joined #archiveteam-bs
[05:27] *** fie__ has quit IRC (Read error: Connection reset by peer)
[05:28] *** fie__ has joined #archiveteam-bs
[05:41] *** vitzli has joined #archiveteam-bs
[06:02] *** BlueMaxim has joined #archiveteam-bs
[06:10] SketchCow: we are up to 2016-02-29 with kpfa now
[06:11] we are now behind them by just 6 weeks
[06:18] *** godane has quit IRC (Quit: Leaving.)
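[Note on the pv discussion at 03:56-04:00: with -l, pv counts lines, but unless you also pass -s it still estimates the total from the file's byte size, so the percentage stalls short. A minimal sketch of the corrected invocation, assuming a reasonably recent GNU pv that treats -s as a line count in line mode; the list filename and the downstream script are hypothetical:]

    # Feed the census identifier list through pv with an explicit line total so
    # the progress bar and ETA are computed in lines, not bytes:
    pv -l -s "$(wc -l < identifiers.txt)" identifiers.txt | ./process_identifiers.sh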
[06:21] *** godane has joined #archiveteam-bs
[06:49] *** bwn_ has quit IRC (Quit: Quit)
[06:50] *** bwn has joined #archiveteam-bs
[06:50] *** Honno has quit IRC (Read error: Operation timed out)
[06:51] *** JesseW has quit IRC (Ping timeout: 370 seconds)
[07:35] *** schbirid has joined #archiveteam-bs
[08:44] *** vitzli has quit IRC (Quit: Leaving)
[09:44] *** vitzli has joined #archiveteam-bs
[09:51] *** godane has quit IRC (Leaving.)
[09:53] *** godane has joined #archiveteam-bs
[10:15] *** bwn has quit IRC (Read error: Operation timed out)
[10:21] *** ohhdemgir has joined #archiveteam-bs
[10:31] *** bwn has joined #archiveteam-bs
[11:28] *** BlueMaxim has quit IRC (Quit: Leaving)
[13:03] *** Medowar has joined #archiveteam-bs
[13:28] *** balrog has quit IRC (Ping timeout: 260 seconds)
[13:29] *** balrog has joined #archiveteam-bs
[13:29] *** swebb sets mode: +o balrog
[13:59] *** Honno has joined #archiveteam-bs
[14:46] *** ohhdemgir has quit IRC (Read error: Operation timed out)
[15:08] *** Honno has quit IRC (Read error: Operation timed out)
[15:40] *** metalcamp has joined #archiveteam-bs
[15:57] Government Information Films should start appearing here https://archive.org/details/PublicInformationFilms
[16:01] hey, is there something like winzip for WARC? I mean where you can just make it spit out files like /index.html, /header.jpg and so on?
[16:01] warcat
[16:01] thanks
[16:03] *** Yoshimura has joined #archiveteam-bs
[16:30] *** Honno has joined #archiveteam-bs
[16:35] What are the most pressing problems of AT? I noticed the mirror project, and that a lot of pipelines stall from running out of space.
[16:36] I believe there must be a way to overcome space shortage on pipelines, as there are places that are dedicated to storage.
[16:37] this starts with a North Korean weather forecast https://archive.org/details/archiveteam_videobot_KCTV_20160414_0747
[16:38] *** JesseW has joined #archiveteam-bs
[16:42] The most interesting part is that Korea is not split in half.
[16:46] IMO, the most pressing problem in Archive Team is that we have a lot of people to maintain the infrastructure but we cannot get a lot of time out of those people
[16:47] Doesn’t KCTV broadcast 24/7? https://archive.org/search.php?query=creator%3A%22Radio+and+Television+Broadcasting+Committee+of+the+Democratic+People%27s+Republic+of+Korea%22 has lots of test images.
[16:47] by maintain the infrastructure I refer to documentation, ops, software development, etc
[16:47] Apart from my health/life problems, which make the time I am able to do stuff unpredictable, I do not work at the moment, so if competent, I can put in time.
[16:50] Not sure which part needs the attention the most. I just see a lot of stuff that does, but am not able to assess the importance of each.
[16:57] there are some feature requests on seesaw-kit that are nice-to-haves
[16:58] I believe seesaw is python though, which is a language out of my scope and I would like not to go into that much, but will take a look.
[16:59] there is also some work to be done on https://github.com/ArchiveTeam/universal-tracker
[16:59] Thinking about whether or not it might be worth rewriting the stuff. The largest problem I see is disorganisation, rather than lack of features, which in turn makes adding more features more problematic.
[17:00] Will look at the tracker; redis and ruby are nice.
[17:01] I strongly prefer you don't rewrite things
[17:01] rewriting is fun for the programmer, horrible for everyone else
[17:01] unless you take steps to ensure a compatible interface
[17:02] I am aware of that.
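[Note on the WARC-unpacking question at 16:01: warcat is run as a Python module, and its extract command writes the archived responses back out as a file tree. A rough sketch; the flag names follow warcat's README as recalled, and the WARC filename is just an example, so verify before use:]

    pip install warcat
    # unpack response records from a WARC into ./extracted/, showing progress
    python -m warcat extract example.warc.gz --output-dir ./extracted --progress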
[17:02] And compatibility is always my concern. I like stuff working better, but not learning too much new stuff without reason.
[17:03] I am going to work very hard
[17:03] To find the best nickname for you
[17:03] And it will stick. forever
[17:04] In a mocking sense? Pfft.
[17:05] It will take time though to get familiar, so I do not expect to rush any rewrite; understanding first is more crucial. A few things I noticed are: 1) the Warrior UI eats a lot of CPU, 2) there should be an internal limit on B/W speed, so one is able to limit upload and download separately, 3) pipeline (bot, warrior) problems running out of space (ability to split tasks and merge on the server?)
[17:06] so like the very first thing I would like to see is an update to the warrior image
[17:06] there have been people who have noticed that it is running system components with known vulnerabilities
[17:06] Just finding ways to get more work done the best way. Often it boils down to simple problems. While (not good enough); apply 80/20 rule; repeat.
[17:06] you are talking about the 20%
[17:06] or the 10%
[17:06] The 1%
[17:07] Should a new image be based on Debian also?
[17:07] yes
[17:07] ideally, it would be the same system, just refreshed
[17:07] apt-get update? :D
[17:08] In productive news, the bootleg uploads are going great. Most metadata is getting in nicely, with jpg album covers when available, and full tracks, all derived from .FLACs and the usual bootlegger provenance efforts. https://archive.org/details/buddy_miller_2004-11-07_Nashville_TN
[17:08] Yoshimura, full dist upgrade
[17:08] apt-get update / apt-get dist-upgrade might get you partway there, there's also packaging and upload
[17:08] and then testing to make sure things still work
[17:08] and that
[17:08] Ok, I'll try that. Also the UI of the warrior and likely the backend (or just aggregating in browser)
[17:09] PurpleSym: They broadcast during daytime, and go to a test image during the night
[17:09] and then fixing when things don't work
[17:09] I don't know who has access to update the IA image we use for distribution; that's something else that needs to be sorted out
[17:09] PurpleSym: we're recording it 24/7 though
[17:09] And replacing (or having settings for) the console backlog and the graph, to avoid CPU hogs. Maybe storing the backlog in the browser, but not rendering it
[17:09] your nickname is Scope Creep
[17:11] No, that's his mantra
[17:11] I have to disagree with that one.
[17:11] I find it offensive (I find untrue stuff offensive)
[17:11] Got it.
[17:11] sorry then
[17:11] Two Problems.
[17:11] *** JesseW has quit IRC (Ping timeout: 370 seconds)
[17:12] that said, updating the image would be super-valuable and easily adaptable
[17:12] Two Problems. Perfect.
[17:12] arkiver: Have you investigated cutting the “empty” parts out?
[17:12] I think there was talk of a "timer"
[17:12] PurpleSym: no, and I'm not planning on doing that
[17:12] PurpleSym: I want to save the full raw stream
[17:12] ah ok
[17:13] I see.
[17:13] If the warrior image is being updated, might as well update the docker image too :))
[17:13] or container, if you're an ass that likes correct terms
[17:13] There is a difference between "when stuff is being broadcast, but it's blank" and "when nothing is coming out"
[17:13] Some editing would be nice imo.
[17:13] Kazzy: yeah at some point
[17:13] actually, who even has access to that on Docker Hub
[17:13] is it Filippo?
[17:14] http://regex.info/blog/2006-09-15/247
[17:14] There you go. Two Problems.
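[Note on the image-refresh idea at 17:07-17:08: "the same system, just refreshed" amounts to updating the Debian packages inside the existing warrior VM, then re-testing and re-packaging it. A minimal sketch of the in-VM part:]

    # inside the warrior VM (Debian), pull in current packages and security fixes
    sudo apt-get update
    sudo apt-get dist-upgrade
    sudo apt-get autoremove --purge
    # afterwards: check that a few projects still run, then export/upload the refreshed image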
[17:14] oh it's "archiveteam"
[17:14] ok, I need to figure out who that is
[17:14] PurpleSym: no, we're not going to do editing. As with all streams we're recording, this is saved as the raw stream of bits we're getting in.
[17:14] Actually recording three streams at the moment
[17:14] SketchCow: https://xkcd.com/927/ ... For production stuff, I am trying to avoid that. For experimentation I do not care. So not untrue, but not something I can identify with :P
[17:15] European Parliament plenary sessions https://archive.org/details/archiveteam_videobot_European_Parliament_plenary_session_20160414_1003
[17:15] yipdw_: dockerfile on archiveteam/ states maintainer as Filippo too
[17:15] Kazzy: ok cool, I'll drop him a line
[17:15] And GMU-TV (a university tv stream) https://archive.org/details/archiveteam_videobot_GMU_TV_20160414_0852
[17:15] or you can heh
[17:15] Did you just quote xkcd at me
[17:15] Adorable
[17:15] yipdw_: Yeah, not sure what all I should or shouldn't follow or how much I should trim the image down, but as a part-time sysadmin I have no trouble; the image is an excellent thing for me. Will work on the warrior then.
[17:15] I'll tell Randall
[17:15] I'll try to remember if I catch him online
[17:16] I would really really like there to be a way to generate OVAs from Dockerfiles
[17:16] SketchCow: Not because you do not know, but to have a reference to continue the message; I expected you to know actually :)
[17:16] I feel like this is a tool that someone has built somewhere
[17:17] the benefit of that in our case is that the dockerfile becomes the canonical reference and we generate the OVA and the Docker image at once
[17:17] actually Yoshimura if you know about something like that, or can do that, that might be pretty cool
[17:18] it'd be like when alard got WARC support in Wget or when emscripten got massively buff
[17:18] archiveteam-initiated, big side benefits everywhere
[17:21] .. got a Q about ArchiveBot, not sure if I should ask here or not: How crucial is the Firewall directive? Is IDS and IPS + a few ports forwarded fine, or a problem?
[17:22] I don't know what you're referring to; ArchiveBot doesn't do anything with the host firewall
[17:22] About dockerfiles: not sure whether to make dockerfiles able to generate images, or to make a tool that makes images with the dockerfiles. Would both be fine?
[17:23] turn dockerfile into image
[17:23] yeah, I think that'd be pretty cool
[17:23] yipdw_: I am referring to the host having some services running that can use some high ports, plus a lot of BW and disk space.
[17:23] the Warrior dockerfile is based on phusion/baseimage, which is Ubuntu and not Debian, but it seems close enough that we can just test it on a few things and call it good if it works
[17:24] In a container of course, so the host could be used for ArchiveBot and as a docker host at the same time.
[17:24] A few days ago I signed up for a VM and ran out of space immediately. We should figure out a way to prevent out-of-space problems.
[17:25] back up
[17:25] what project are you referring to, the warrior or ArchiveBot
[17:25] they're different things
[17:25] Kinda both, sorry to be mixing things, I am aware of that.
[17:26] they have different concerns re: space
[17:26] I did ask about the firewall in connection with ArchiveBot, and out of space primarily the warrior.
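[Note on the Docker side discussed at 17:13-17:23: the warrior container is built from the ArchiveTeam warrior-dockerfile repo. A sketch of building and running it locally; the image tag and the web-UI port 8001 are assumptions based on that repo's defaults:]

    git clone https://github.com/ArchiveTeam/warrior-dockerfile.git
    cd warrior-dockerfile
    docker build -t archiveteam/warrior-dockerfile .
    # run detached and expose the warrior web interface
    docker run -d -p 8001:8001 archiveteam/warrior-dockerfile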
[17:26] in the warrior case, running out of space is a problem but it's one that you can really solve with "discard VM image, make bigger"
[17:26] if there's a bug in the warrior code that is causing infinite fetch, we fix the bug and try again
[17:27] each task in the warrior is easily re-dispatchable, so this solution is really ok
[17:27] for the ArchiveBot case it depends on what component is running where
[17:27] Yes, but not everyone can do that well, and if it could be solved by uploading instead, it would make other stuff more efficient. In one place I have no problem with BW, and in other places no problems with storage.
[17:28] Warrior architecture assumes that when you upload, you're done
[17:29] changing that is possible but it is going to ripple through many other systems
[17:29] I think you're extrapolating from one bad experience
[17:29] I do not want to say that it is, but it might be worth it.
[17:29] many, many warrior tasks are small enough that they easily fit in that 40 GB default image
[17:30] Actually I am not, and having the limit lowered would be better, or having some kind of ability to at least prefer tasks that should be smaller.
[17:32] we currently do not have a way to describe or express preference, and the job distribution mechanism has only rudimentary support for that sort of thing (i.e. "send this task to this nickname")
[17:32] in practice, this has not been a problem
[17:32] Not re-running the tasks would also mean more stuff getting done. .. About ArchiveBot... after running out of space I considered options for getting more. And I could share a host, but would still like to run some dockers on it, using the unused cpu/bw.
[17:33] Well, it should at least be able to determine: I am running out of space (100MB? left), so let's purge this task, delete the data and get another.
[17:34] *** schbirid has quit IRC (Quit: Leaving)
[17:34] So it can run unsupervised.
[17:34] there is, or was, an endpoint to report job failure
[17:34] it has not been tested and it is not used
[17:34] if you want a mechanism like that, someone (perhaps you) will first have to validate that that tracker endpoint works as advertised
[17:35] But you said yourself: delete the data drive. By that you also remove the data; does the task restart?
[17:35] If not, is it returned to the pool automatically?
[17:36] it does not restart until someone requeues truant tasks
[17:36] to have the warrior report a failure is possible, but AFAIK it is not done on either the tracker or warrior end
[17:38] that is to say: yes, if a warrior is interrupted, the task is just left out until a project operator hits the "requeue tasks that have been open for more than this long" button
[17:38] this ends up working pretty well, as offensive as it might sound
[17:38] hence we haven't done an explicit failure flow
[17:38] Still solves a problem. Better than having a dead warrior.
[17:38] But avoiding a warrior loading the server just to delete the data could be nice.
[17:39] I believe then that a mechanism to report a) disk size used by the node, b) a unique node ID, c) failure due to running out of space, would be beneficial. And a warrior-side check of free space so it never outgrows the filesystem, purging the largest task.
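[Note on the free-space check proposed at 17:33-17:39: a warrior-side guard could simply watch the data directory and stop taking (or purge) tasks below a threshold. A hypothetical sketch; the path and threshold are made up, and the purge/report steps would need the untested tracker-side failure endpoint mentioned above:]

    DATA_DIR=/data/data       # hypothetical location of task data
    MIN_FREE_MB=100
    avail_mb=$(df -Pm "$DATA_DIR" | awk 'NR==2 {print $4}')
    if [ "$avail_mb" -lt "$MIN_FREE_MB" ]; then
        echo "low disk space (${avail_mb} MB free): pausing new tasks" >&2
        # here: abort the largest in-progress task, delete its data,
        # and report the failure to the tracker so the item gets requeued
    fi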
[17:40] ok
[17:56] SketchCow: kpfa is up to 2016-03-31
[17:56] i will be uploading the first 13 days of april soon
[18:10] Niiiice
[18:12] *** alfie has quit IRC (Read error: Operation timed out)
[18:13] *** Stiletto has quit IRC ()
[18:22] *** alfie has joined #archiveteam-bs
[18:28] *** Medowar has quit IRC (Quit: Connection closed for inactivity)
[18:37] *** Honno has quit IRC (Read error: Operation timed out)
[18:45] *** Stiletto has joined #archiveteam-bs
[18:46] *** Stiletto has quit IRC (Client Quit)
[18:50] *** VADemon has joined #archiveteam-bs
[19:01] *** Stiletto has joined #archiveteam-bs
[19:02] *** Stiletto has quit IRC (Client Quit)
[19:04] *** vitzli has quit IRC (Read error: Operation timed out)
[19:11] *** Stiletto has joined #archiveteam-bs
[19:20] *** bwn has quit IRC (Ping timeout: 250 seconds)
[19:31] *** Honno has joined #archiveteam-bs
[19:50] *** bwn has joined #archiveteam-bs
[19:54] *** mismatch has joined #archiveteam-bs
[20:02] *** Stiletto has quit IRC ()
[20:07] I'm trying to rebuild the warrior vm for grins.
[20:08] grins — hm, i don't know if I've heard of that linux distribution. :-)
[20:08] Have you ever heard of GNU/Shits and Giggles, it's comparable.
[20:09] snicker
[20:09] But yeah, I'm just basically walking through the steps of the warrior-preseed repo with the latest debian ISO
[20:09] great!
[20:11] *** Stiletto has joined #archiveteam-bs
[20:13] phuzion: So I should save my time and stop, then?
[20:14] Yoshimura: How far along are you?
[20:14] Doing more than taking debian and slapping it on. But if you want to do it, feel free.
[20:15] IDK if it would be a benefit or the other way around. You said for grins, so not sure exactly how serious you are.
[20:15] You said something about redoing the dockerfile or something, right? If you're working on that, awesome, go ahead and keep going. I'm just playing around here with the vm stuff, more of a learning experience for me, but if it works and can be helpful, I'll be releasing it to the world.
[20:16] I know I spoke about this earlier, but can someone please take a look at getting a copy of https://community.virginmedia.com/t5/Email-Cloud-and-webspace/bd-p/email as part of the virgin grabs? I tried archivebotting it, but it wouldn't grab anything
[20:17] I'd say it's good for you both to do it. Having two people each understand how to do it is useful no matter whether either one ends up used.
[20:18] s/you/us
[20:18] I mean in the sense for AT. Trying to also figure out how and why megawarc is used.
[20:20] Someone told me to SSH in, but ssh seems to be disabled or on another port.
[20:20] megawarc isn't really relevant to the warrior image
[20:21] SketchCow: something might be going wrong here http://fos.textfiles.com/ARCHIVETEAM/
[20:29] Gah, there really should be a requirement that standards older than X years are freely published; if I want to read about something released in 1992 I shouldn't have to pay hundreds of dollars to do so.
[20:30] yep
[20:30] Yes, I am as strong a proponent of copyright as I am of copyleft. Some people seem not to understand.
[20:31] One should have time to make money off stuff, while making it free after a time.
[20:31] Jonimus: Btw, most research papers are Free, but the access to them is not.
[20:31] That's why CiteSeerX somewhat helps
[20:32] I'm currently looking for some old ANSI/EIA/ISO standards.
[20:32] Those should all be free.
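[Note on megawarc, mentioned at 20:18-20:20: the ArchiveTeam/megawarc tool bundles many small WARCs into one large item for upload, with a JSON index so the originals can be restored later. A rough usage sketch; the subcommand and flags are recalled from its README and the filenames are examples, so verify against the repo before relying on it:]

    git clone https://github.com/ArchiveTeam/megawarc.git
    # pack a batch of small warcs into mybatch.megawarc.warc.gz (plus .tar and .json.gz index)
    ./megawarc/megawarc --verbose pack mybatch file1.warc.gz file2.warc.gz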
[20:32] And by legal framework they are not patentable in the EU
[20:35] and yet I can't get a copy of it for less than $100, and even that is likely a "bootleg"
[20:38] *** Stiletto has quit IRC (Read error: Operation timed out)
[20:39] *** metalcamp has quit IRC (Ping timeout: 244 seconds)
[21:00] *** dashcloud has quit IRC (Quit: No Ping reply in 180 seconds.)
[21:02] *** dashcloud has joined #archiveteam-bs
[21:08] The official link to IEEE 754 (floating point numbers) seems to have a registration/paywall too. That's "great" because I can't even read it for my own educational purposes. Have a good night.
[21:08] *** VADemon has quit IRC (Quit: left4dead)
[21:14] *** Honno has quit IRC (Read error: Operation timed out)
[21:18] *** Stiletto has joined #archiveteam-bs
[21:34] Does anyone know tar specifics? Why and by how much is it padded at the end of the file?
[21:35] tar pads to fill 512 byte blocks
[21:36] Nope, there is like a lot more data at the end of the whole archive.
[21:36] Also for each "record" it pads with two internal blocks, which are, yes, 512.
[21:36] how much more data
[21:36] *a lot more nulls.
[21:38] what tarball are you looking at?
[21:38] Not sure exactly, but looks like 8144 bytes.
[21:38] Just one created with GNU tar.
[21:39] Those 8144 bytes exclude the two-blocks-per-record padding.
[21:39] Yoshimura: look at the blocking factor, it defaults to 20*blocksize
[21:41] Thanks! I did want to say it does pad to 10kB. I did not find the multiplier in the source so far, thanks for telling me!
[21:45] i have done the impossible and archived all of kpfa : https://archive.org/details/kpfa-archives-radio-podcast-2016-04-13
[21:46] godane: well done!
[21:46] :)
[21:48] Yoshimura: yeah, adding the option -b 1 seems to shrink a minimal tarball from 10240 bytes down to 2048. Why are you looking at the innards of tarballs, btw?
[21:49] To be able to work with them.
[21:50] cool
[21:50] Now at the challenge of how UUIDs work, whether they could just be changed randomly, and what a stream-optimized vmdk is.
[21:51] it's a v4 UUID yes
[21:51] +if
[21:52] Not sure what that implies, but it's vmdk + vbox uuids
[21:53] The images seem to have different uuids for the vmdk and the vbox.
[21:53] seems reasonable
[21:53] And stream-optimized is just meant for the network. Not sure what the exact optimization is, but if the stream is also sparse at the same time, then no problem.
[21:54] Just cannot figure out why there is a second UUID for VBox if they are of the same format.
[21:55] the VirtualBox developers would probably know better
[21:56] to make images, though, VBoxManage import/export is likely a better interface
[21:57] That has no command line
[21:57] yes it does
[21:57] Disregard, documentation.
[21:57] https://www.virtualbox.org/manual/ch08.html#vboxmanage-import
[21:58] seems fine to me
[21:58] Should I assume users will be running the latest vbox or the oldest?
[21:59] oh wait
[21:59] https://www.packer.io/
[21:59] I did miss that section, but it does not say much anyway. If you assume the creator would have vbox itself, then it's fine I suppose.
[21:59] never mind, we probably already have a tool to do this
[22:00] I should have figured Hashicorp would have done it
[22:00] Nevermind what?
[22:01] I was thinking about image construction from a single source and found a tool that will likely handle it
[22:01] Seems fine, but it does use commands from the frameworks themselves?
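[Note on the tar padding discussion at 21:34-21:48: an archive ends with two all-zero 512-byte blocks and is then padded up to the blocking factor, which defaults to 20 blocks (10240 bytes) in GNU tar. A quick demonstration matching the numbers quoted above:]

    echo hi > tiny.txt
    tar -cf default.tar tiny.txt        # default blocking factor of 20
    tar -cf small.tar -b 1 tiny.txt     # blocking factor of 1
    wc -c default.tar small.tar         # expect 10240 and 2048 bytes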
[22:01] Which means to make vmware, you've got to have vmware
[22:01] I don't know, but if it gets the job done I don't care
[22:02] Well, I've got the creation pinned down, though, at least for vbox so far. If you have the disk images.
[22:05] Nevermind, I just noticed ova works with vmware.
[22:06] yipdw_: If you want to use that and do not care about me, I can stop. I was looking for a less complex, simpler tool for easy (re-)deployment of stuff.
[22:07] (It sounded like "Nevermind, I don't care anymore.", so correct me if wrong.)
[22:09] packer does seem to fit the bill, exists, and has support, and those are attractive characteristics
[22:10] I do not disregard that. But the point was that a tool that could do the same without needing vbox and/or other stuff might be nice also.
[22:12] http://www.sacbee.com/news/local/article71659992.html
[22:12] * Yoshimura just wonders why everyone is "redefining" "experience" and "stuff" ten years after they could have already done it. Wonder if it was the climate or just people not seeing the future.
[22:15] * Yoshimura wishes to find a place in this all. In the past, people thought I was nuts; today they think I am whatever, just not a visionary, except a few.
[22:21] yipdw_: Also sorry to say, you kinda failed with what it can do. It is not in line with what was discussed. So it would/could only fill the bottom part of that, which could be pluggable by either packer or what I was looking at doing.
[22:22] k
[22:22] So yes, it is absolutely great to have that, while we/I can have another tool to do the bottom, while being able to generate non-vbox/qemu/vmware
[22:23] we don't really need anything but OVA and docker
[22:23] But absolutely thank you for the link, great to know about that. My plan was to make vbox/vmware, and later likely also qemu.
[22:23] I don't see where Packer is insufficient, but ok
[22:24] If we do not, it would be the tool of choice. I do not like dependencies if possible, so a single dependency instead of packer + vbox might be useful. Including packer as a builder, one can then choose which one :)
[22:24] Packer takes an iso or ova, installs stuff and spits out an ova.
[22:24] I feel like you're making this too difficult
[22:25] Nah, I am not. Making another tool is not making anything more difficult, just needing more time to have a nicer, alternative way of building it.
[22:25] The talk was about using docker to make an ova.
[22:25] Docker can't make OVAs
[22:26] Exactly.
[22:26] so if you want to generate a Docker image and something else, you're probably going to want something else
[22:26] That's why packer will fit the bottom part as one of the options.
[22:27] that's all that's necessary
[22:28] Maybe, maybe not.
[22:28] no, it really is all that's needed
[22:28] So what image would be the base for the warrior then?
[22:28] Or what iso?
[22:28] pick Debian, probably
[22:28] Too large?
[22:28] it works now and would be a good choice going forward
[22:29] Also now you have both docker and debian: two things, possibly different errors, two things to manage.
[22:29] you have a Docker image as an output
[22:29] I am now starting to understand why the architecture of the software projects is so complex now.
[22:30] Docker itself is not involved in the construction of anything
[22:30] Yeah, but docker does not use classic init, in the best case.
[22:30] wtf
[22:30] that doesn't matter
[22:30] It does, orphans, and stuff.
[22:31] the init system used by a docker image is up to the image
[22:31] that's the point of ENTRYPOINT etc
[22:31] Yeah, I got it, I see what you mean.
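[Note on VBoxManage import/export mentioned at 21:56-21:57: appliances can be produced and consumed entirely from the command line, which is the piece Packer automates around an installer ISO. A sketch; the VM names and output filename are examples:]

    # export an existing VirtualBox VM to an OVA appliance
    VBoxManage export "archiveteam-warrior" -o warrior.ova
    # re-import it elsewhere under a new name
    VBoxManage import warrior.ova --vsys 0 --vmname "warrior-test"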
[22:31] Rewrite the dockerfile to packer, so it works for both.
[22:32] I was going by the route of using docker + a description of the virtual machine to make an ova.
[22:32] if you're looking for complexity, you're going to find it
[22:32] By that you could take most dockers and turn them into virtual machines.
[22:32] shouldn't this be maybe in #warrior
[22:32] probably
[22:32] but I need to get back to something else
[22:32] heh
[22:33] *** BlueMaxim has joined #archiveteam-bs
[22:33] So basically you are bashing me for doing it, as bullshit.
[22:33] Could have been a great tool, but no. Yes, I cannot stand that, it really demotivates me. (Me blames past abuse)
[22:34] So yeah, let's use packer.
[22:34] I am done.
[22:35] no, I really do need to get back to something else, and talking about this ate up more time than I thought it would
[23:12] The bottom line is that to make the docker image you would still have to have multiple configs and either an iso or an existing machine to base it off of.
[23:12] Which means still dual work and scripts, instead of using one common thing. .. btw should I still use i386, or can I expect amd64 everywhere?
[23:12] Probably the wrong chan now.
[23:26] i'm grabbing the audio pages for kpfa
[23:27] looks like some mp3s may have escaped my grab
[23:27] i only went after mp3s with 00.mp3 and 30.mp3
[23:32] but then again these may be only 404 pages for these mp3s
[23:32] example: http://archives.kpfa.org/data/20150112-Mon1615.mp3
[23:32] it doesn't exist
[23:33] also the 1600 mp3 is 59:51 in length so there shouldn't be anything missing in that hour
[23:41] *** RichardG has quit IRC (Read error: Connection reset by peer)
[23:43] Sounds like Yoshimura is working on some real changes to the Warrior :)
[23:43] *** RichardG has joined #archiveteam-bs
[23:46] Currently more depressed than that. But while we are at it, why not build everything using host capability? And while we are at it, arkiver, who has the final word regarding the warrior?
[23:55] Yoshimura: may I chip in?
[23:56] Sure.
[23:57] actually, before i throw my opinion at you, throw your goal at me
[23:58] It was not my goal even.
[23:58] the goal, then :P
[23:59] I do not know, because yipdw seems to switch.
[23:59] ...
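[Note on the kpfa gap check at 23:27-23:33: whether an off-the-half-hour mp3 exists can be tested with a HEAD request before deciding it needs a re-grab. A small sketch using the example URL quoted above:]

    # prints only the status line, e.g. "HTTP/1.1 404 Not Found" if the file was never there
    curl -sI http://archives.kpfa.org/data/20150112-Mon1615.mp3 | head -n 1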