01:12 <dashcloud> so, Monoprice is bringing IPS displays to the masses: http://www.monoprice.com/products/product.asp?c_id=109&cp_id=10909&cs_id=1090901&p_id=9579&seq=1&format=2
01:15 <chronomex> same monitor on ebay from a thousand koreans for $50 less
01:16 <dashcloud> hmm- buy from Ebay, or buy from reputable vendor with warranty?
01:16 <chronomex> monoprice offers a warranty?
01:16 <chronomex> you can chargeback with ebay too
01:16 <dashcloud> "As you've come to expect from Monoprice, we stand behind our products and offer a full 1 year warranty, which is at least 3-4 times what is offered by other monitor manufacturers. Additionally, we are so confident of the quality of these displays that we are guaranteeing these monitors will have less than 5 dead pixels. If you can count 5 dead pixels anywhere on the screen, we'll give you a new one. By comparison,
01:16 <dashcloud> the industry standard, even for industry leaders like Apple and LG, is 10 dead pixels or even more."
01:16 <chronomex> not bad
01:16 <chronomex> one I bought had 0 dead/stuck pixels :P
02:13 <godane> uploaded: https://archive.org/details/bitgamer-archive
02:47 <SketchCow> Moved to archiveteam
05:13 <GLaDOS> starting botfeed for archivist
05:30 <GLaDOS> First test works, time to run the automated ingestor.
05:31 <chronomex> om nom nom
05:35 <GLaDOS> 103 books, just for one MegaHAL
05:39 <GLaDOS> 13:39:34 up 122 days, 3:45, 1 user, load average: 1.29, 1.10, 0.78
05:46 <godane> i think we need something in wget so you can download only images from other hosts
05:47 <godane> sort of --accept-regex-host or something
05:48 <godane> this way when you mirror sites that have lots of external images you can do a -H --accept-regex-host='(jpg|jpeg|gif|png)' or something
05:48 <godane> i use underground gamer for example
05:49 <godane> it has tons of images hosted on it but also a ton hosted on other websites
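[Editor's note: GNU wget 1.14 added --accept-regex and --reject-regex, which match against the complete URL. That is not the per-host filter godane is proposing, but it can approximate one. A sketch, with example.com standing in for the site being mirrored:

    # span hosts (-H), but only accept pages on the main site
    # or image files from anywhere
    wget -r -l 2 -H \
         --accept-regex='^https?://example\.com/|\.(jpe?g|gif|png)$' \
         http://example.com/

Because the regex sees whole URLs, the first alternative keeps the recursive crawl on the primary host while the second lets cross-host image URLs through.]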
16:45 <balrog-> interesting observation/argument coming up ... it seems most of the big "disk preservation" groups aren't interested in a large portion of what's out there
16:46 <balrog-> SPS only wants games; redump and the like mainly focus on games; pretty much no one cares about cracked/"pirated" materials (stuff from the 80s and 90s and such, not current, but current should not be ignored either imho) - they only want originals
16:47 <SketchCow> Where is it coming up?
16:48 <balrog-> in #messdev and the private mame list
16:48 <joepie91> balrog-: I'd argue that period of time is especially interesting with regards to cracked versions
16:48 <joepie91> because of all the demos and such from the various groups
16:48 <joepie91> cracktros
16:48 <joepie91> etc
16:49 <balrog-> yes, absolutely.
16:49 <joepie91> I mean, the only ones I can recall now that still do cracktros
16:49 <joepie91> are hoodlum and fff
16:49 <joepie91> and even those, sparingly
16:49 <joepie91> idk if hoodlum even still exists
16:49 <SketchCow> Spoiler: I've come to not like SPS all that much.
16:49 <SketchCow> I respect the technical effort and the commitment to data acquisition.
16:50 <Coderjoe> yeah.. screw the millions of floppies with people's private data they might be interested in getting back. absolutely no importance there
16:50 <balrog-> they flat out state they only want games. They won't even accept OS releases and such
16:50 <godane> SketchCow: i got a broadcast copy of the screen savers from 2003.07.14
16:50 <Coderjoe> (in case it is missed: </sarcasm> )
16:50 <godane> very good copy
16:51 <balrog-> there are other issues, but I don't want to get into those
16:54 <balrog-> (regarding SPS)
16:54 <balrog-> issues with MAME/MESS dumping and such ... I wouldn't mind discussing this more but I know SketchCow is extremely busy.
16:54 <Schbirid> thank god for u-g
16:56 <balrog-> UG is a help, but the problem here is deeper :(
16:57 <SketchCow> I'm happy to discuss it, but yeah, I'm busy in a general sense.
16:57 <balrog-> we have efforts like the dumping union (another private group), but they only care about arcades
16:57 <SketchCow> but it's a discussion worth having, so go ahead. I'm getting a lot done in other windows
16:57 <balrog-> which means people like myself end up shelling out hundreds of dollars on various equipment to dump and reverse-engineer
16:59 <balrog-> I just brought up like 4 different issues :/
16:59 <Schbirid> SketchCow: fileplanetfileplanetfileplanetfileplanetfileplanet ;P
17:00 <Schbirid> i admire byuu's work for preserving
17:00 <balrog-> this feels like a game of whack-a-mole, or the mythical hydra - fix one problem, three others appear.
17:00 <balrog-> there's no way one or three people will be able to solve this
17:01 <SketchCow> Schbirid: Sorry about that - the slowdown is that I need to set aside a chunk of time to make sure the whole thing goes smoothly, because one mistake kills terabytes
17:02 <Schbirid> np, if i nag too much, just say
17:03 <balrog-> yeah I feel the same
17:04 <balrog-> the more annoying thing that I see is the costs of some of this stuff, which we end up paying anyway to preserve it
17:04 <Schbirid> now, imagine a world with a quickly expiring copyright. things would be so much easier
17:04 <balrog-> and museums? I doubt most museums would be willing to do something like this: http://kevtris.org/Projects/votraxml1/
17:05 <balrog-> (take a look at one of the <color> Board pages)
17:05 <Schbirid> i am sure they would love to, but funding... :(
17:05 <SketchCow> For what it's worth, "there's no way one or three people will be able to solve this" fails to take into account that one of those people might be me.
17:05 <balrog-> SketchCow: yes, yes this is true. I mean people like myself, not like you :)
17:06 <balrog-> Schbirid: no, most museums would not apply heat to an artifact to remove epoxy potting in order to document how it works and repair it.
17:06 <Schbirid> SketchCow: keep your nerves in mind, you cannot save everything
17:06 <Schbirid> balrog-: aye, i spoke too soon
17:06 <balrog-> Schbirid: remember, SketchCow is good at finding other people who are able to help more ;)
17:06 <Schbirid> heh
17:06 <balrog-> and yeah - good luck finding a votrax ml-1. not very many were made to begin with
17:07 <SketchCow> http://archive.org/details/dragon_magazine
17:07 <SketchCow> Let's see who screams
17:08 <balrog-> I think it would go a long way if something like dumping union could be organized for non-arcade hardware. Things get more tricky, because while many arcade board types are well understood, many computers need poking and probing
17:09 <DFJustin> I've said it before but my main beef with sps is lack of transparency
17:09 <balrog-> lately I've been doing research/reverse-engineering of early digital synth hardware, and figuring out secret modes and "tricks" to dump various early protected chips.
17:09 <balrog-> DFJustin: I'm not even talking about SPS in particular here.
17:09 <balrog-> I don't like SPS, but that's beside the point.
17:10 <balrog-> if all we do is talk about how we don't like SPS, we are missing the bigger picture
17:11 <SketchCow> http://archive.org/details/magazine_rack - watch that space
17:11 <SketchCow> it's about to get fucking huge
17:12 <balrog-> :)
17:13 <SketchCow> http://www.crackajack.de/2013/01/09/vintage-man-machine-interface/
17:15 <balrog-> stuff like that .... obtaining one, figuring out how it works, and writing decent emulation is not all that easy
17:18 <balrog-> I suppose everyone here knows that though
17:20 <SketchCow> Last year, the first year I was working for Internet Archive, I was focused on several things. Among them was easier scripting to ingest massive amounts of data into the archive. For that I was rather successful - even outside of archive team specific chicanery, I pulled in something like 100 terabytes of data.
17:20 <balrog-> that's impressive... and quite important.
17:20 <SketchCow> And I found that it's getting very easy, not 100%, but much easier to absorb most of the folksonomic scans and digitizations people have done over the last decade or so.
17:21 <SketchCow> So that is ongoing. In this week's work, the integration of bitsavers will bring 25,000 computer documents into the world in an easy to browse fashion.
17:21 <balrog-> bitsavers? nice. Be warned only the newer stuff there is OCRed... and as you probably know already, new stuff keeps getting added.
17:21 <balrog-> there's also bitsavers/bits
17:21 <SketchCow> Tell me more
17:21 <SketchCow> Also, they often feature computers
17:22 <SketchCow> The functionality of what I've created is AUTOMATIC ingestion.
17:22 <SketchCow> It'll just run with each new addition of material.
17:22 <balrog-> oh, cute: they have the code for XINU, explained in "Comer, Douglas E., Operating System Design: The Xinu Approach, Prentice-Hall" - I have that book
17:22 <SketchCow> It's a similar approach to http://archive.org/details/dnalounge
17:23 <SketchCow> That has no human intervention.
17:23 <DFJustin> IA OCRs and adds a text layer to anything that doesn't already have it
17:23 <balrog-> and with good enough accuracy, right?
17:23 <balrog-> fulltext OCR search is quite nice to have, in addition to the metadata
17:23 <SketchCow> OCR at archive is shit.
17:23 <SketchCow> See, you can't do this, balrog.
17:23 <SketchCow> This is how projects fail.
17:23 <balrog-> I've found some pretty useful stuff because Google OCRs PDFs as they index them.
17:24 <balrog-> yes, the OCR is not great, but in many cases it's good enough
17:24 <SketchCow> You get the foundations in the most non-intrusive decision making possible, and THEN you go "what about the curtains? do we have peonies or sunflowers in the front yard?"
17:24 <SketchCow> If you oscillate between "oh god, floppies are dying and SPS doesn't care" and "but what about the OCR accuracy", that's how you don't get stuff done.
17:25 <SketchCow> You get paralyzed.
17:25 <SketchCow> Move in waves.
17:25 <balrog-> I'm not asking about OCR accuracy. I'm asking about OCR indexing
17:25 <SketchCow> You're asking about something above "get it all online"
17:25 <balrog-> even google's crappy OCR is useful because it goes in an index that can be fulltext searched.
17:25 <SketchCow> Which is the first problem.
17:26 <DFJustin> this may be a good time to mention I just got ham radio magazine 1985-1986 from emule, do we have that already
17:26 <SketchCow> yes, it was made dark.
17:27 <joepie91> balrog-: I think what SketchCow is trying to explain, is that if you start trying to create additional functionality at this point, you are leaving the problem before it partially unsolved (getting everything available in the first place)
17:27 <balrog-> yes, that is true
17:27 <joepie91> and losing focus and manpower to solve that problem
17:27 <SketchCow> Right now, in my house, I have negotiated and received a $25,000 Scribe digitizer from Internet Archive.
17:27 <SketchCow> It's in the other room, I've been setting it up.
17:28 <SketchCow> My official name in their system is Internet Archive Poughkeepsie
17:28 <SketchCow> You realize what this means.
17:28 <SketchCow> Pro-level digitization is now not subject to justification for computer documents.
17:28 <SketchCow> there's a place where volunteers can scan manuals and other items.
17:28 <SketchCow> It's right here.
17:29 <balrog-> but you have to be very careful when getting everything, to not miss important things. this is more of an issue with hardware than software. Plenty of arcade boards and other boards had ROMs mis-dumped or certain ones not dumped at all in the past, and with rare prototypes, collectors are rarely willing to allow anyone to touch them once they have them.
17:29 <balrog-> so if I have time, I can drag my paper documents there and scan them?
17:29 <SketchCow> Yes.
17:29 <balrog-> manuals, schematics, etc
17:29 <balrog-> the Scribe is designed for bound books, correct?
17:30 <SketchCow> Books that open with a side bound, yes
17:30 <balrog-> but for stuff that can easily be unbound or that I'm willing to unbind, I'm better off doing so and sheetfeeding...?
17:31 <balrog-> another question I've had for a while, and this is not specifically for SketchCow: has anyone done work on postprocessing color scans?
17:32 <joepie91> balrog-: postprocessing in what sense?
17:33 <joepie91> I've been scanning a few comics and running them through scan tailor, which went fine
17:33 <joepie91> needed some manual adjustments, but otherwise it was great
17:33 <balrog-> taking the multi-gb scans and compressing them down into something that doesn't take so much space yet has sufficient quality
17:33 <DFJustin> btw the folkscanomy collection should be linked off http://archive.org/details/additional_collections so people can find it
17:33 <joepie91> hmmm
17:33 <balrog-> for bilevel you can use G4 fax compression, which is great
17:33 <joepie91> what format are the scans in?
17:33 <balrog-> usually uncompressed or lzw or zip tiff
17:33 <joepie91> because I'd imagine this would be something for imagemagick or similar
17:33 <balrog-> some form of tiff, basically
17:34 <balrog-> yeah I have been using imagemagick but color is just a pain
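[Editor's note: one possible ImageMagick route for color pages, in the spirit of DFJustin's "high quality jpg" suggestion a few lines below; the filenames and quality setting are placeholders, not from the log:

    # batch-convert color TIFF scans to high-quality JPEGs
    mogrify -format jpg -quality 92 page_*.tif

    # optionally downsample a 600dpi master to 300dpi first
    # (assumes the TIFFs carry correct density metadata)
    convert -units PixelsPerInch page_0001.tif -resample 300 -quality 92 page_0001.jpg
]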
17:34 <mistym> For bilevel PDF I usually prefer jbig2; it's typically much much smaller, even lossless!
17:34 <joepie91> how come..?
17:34 <balrog-> does pdf support jbig2?
17:34 <balrog-> huh it does.
17:34 <mistym> Yeah, it has for a few versions back. Most readers now support it too.
17:35 <joepie91> also, balrog-... for lossless color scans you're probably looking at something like PNG... for lossy, I'm not sure
17:35 <joepie91> but lossless scans will be HUGE still, probably
17:35 <balrog-> joepie91: tiff+zip and png are rather identical
17:35 <joepie91> really?
17:35 <balrog-> yes, because png uses deflate
17:35 <joepie91> weird, I thought I saw different results
17:35 <balrog-> tiff+zip isn't tiff that's zipped, it's tiff with deflate
17:36 <mistym> For some reason reminds me, TIFF-LZW for 16 bit per channel scans has hilarious results.
17:36 <balrog-> tiff-lzw is ... weird
17:36 <balrog-> joepie91: what tool do you use to create jbig2 pdfs?
17:36 <balrog-> is there a pdf2pdf that can recompress pdfs to jbig2?
17:36 <joepie91> you sure you wanted to address me and not mistym? :P
17:36 <joepie91> as I don't think I ever used jbig2
17:36 <balrog-> err, mistym ^^
17:36 <balrog-> :D
17:36 <joepie91> heh
17:37 <joepie91> also, this may come in handy
17:37 <joepie91> tiff2pdf WILL fuck up your PDFs when JPG compression is used
17:37 <joepie91> I have a fixing script for that
17:37 <DFJustin> for colour just use a high quality jpg, yes the file size will be big but who cares, better than having to rescan everything later when everyone has PB hard drives because the quality was shit
17:37 <joepie91> balrog-: http://git.cryto.net/cgit/scantools/tree/fix-pdf
17:38 <balrog-> can you get tiff2pdf patched maybe?
17:38 <DFJustin> or even better jp2
17:38 <balrog-> (upstream)
17:38 <joepie91> if you use tiff2pdf and it results in inverted JPGs in your PDF, then run that
17:38 <balrog-> joepie91: I get inverted tiffs in my pdf with tiff2pdf
17:38 <joepie91> it will comfortably handle multi-GB PDFs with very little memory, because it does chunked reads and writes
17:38 <mistym> balrog-: You could probably script it pretty easily for PDFs containing a single image layer by extracting to a sequence of TIFFs, then throwing that into jbig2enc, then tossing the results back into a PDF.
17:38 <joepie91> also I have no idea how to fix it upstream
17:38 <joepie91> but it's a known bug
17:39 <balrog-> been having to use tiffcrop -I to correct :(
17:39 <balrog-> mistym: yeah, but what tool to reassemble?
17:39 <joepie91> anyway, balrog-, run your PDFs through that script and they will magically be fixed
17:39 <balrog-> joepie91: ok
17:39 <joepie91> had to write it to fix up my comic scans
17:40 <balrog-> oh so that replaces ColorTransform 0 with ColorTransform 1 in some places?
17:40 <mistym> balrog-: jbig2enc includes a pdf.py script that turns the raw jbig2 into PDFs. If you do it on a page-by-page basis, it's then not too hard to combine the individual pages back into one PDF. Or (if you don't mind increasing the system requirements for reading) you can do the whole jbig2 compression using a single dictionary across multiple pages.
17:40 <joepie91> balrog-: yes, but it does a chunked search and replace
17:40 <joepie91> so it reads in small chunks
17:40 <joepie91> so it doesn't have to load the entire PDF into RAM at once
17:40 <balrog-> ah, yeah.
17:40 <joepie91> and even handles edge cases
17:41 <joepie91> where the search string is over the edge of two chunks
17:41 <balrog-> ugh, all this needs to be on a wiki
17:41 <joepie91> and *even* handles false positives immediately followed by a match :P
17:41 <mistym> Yeah, I should write this down...
17:41 <joepie91> so all edge cases should be covered
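[Editor's note: joepie91's fix-pdf script linked above is the tested tool. The reason a raw search-and-replace is safe here at all is that "ColorTransform 0" and "ColorTransform 1" are the same length, so the byte offsets recorded in the PDF's xref table stay valid. A blunt stand-in, for illustration only:

    # naive same-length binary replace; broken.pdf is a placeholder
    perl -pi -e 's/ColorTransform 0/ColorTransform 1/g' broken.pdf

Unlike the linked script, this does no PDF-aware checking and would also rewrite the byte sequence if it happened to occur inside a compressed stream, so treat it as a sketch of the idea rather than a replacement.]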
18:04 <balrog-> wow, jbig2...
18:04 <balrog-> mistym: slow to view though :p
18:05 <mistym> balrog-: How slow it is depends on how big a dictionary you created :b One dictionary per page is pretty speedy. One dictionary per 100 pages (or more) is slow.
18:05 <SketchCow> Just went past 10,000 documents on http://archive.org/search.php?query=collection%3Abitsavers&sort=-publicdate
18:06 <SketchCow> Also, IT BEGINS
18:06 <SketchCow> http://archive.org/search.php?query=collection%3Amagazine_rack&sort=-publicdate
18:07 <balrog-> mistym: I used the method in the readme: $ jbig2 -s -p -v *.jpg && pdf.py output >out.pdf
18:07 <balrog-> created many .NNNN files and one .sym file
18:08 <mistym> That'd be one dictionary for the entire document, then. More efficient compression-wise, but the bigger the dictionary the slower decoding will be.
18:08 <balrog-> yeah, and for a 386-page document...
18:08 <balrog-> how do you do one dict per page or one per 25 pages?
18:08 <SketchCow> http://archive.org/details/texwiller_magazine (622 issues)
18:09 <balrog-> :/
18:09 <balrog-> oh btw, you might want to make it clearer what it means when books only appear as encrypted DAISY. A friend of mine was confused by that.
18:10 <SketchCow> What do you mean, "what it means"?
18:13 <DFJustin> some kind of auto-generated blurb saying "This book has been scanned by IA but is still under copyright so it is not available to read unless you have perceptual disabilities and have registered with such-and-such US government program" would be nice
18:13 <balrog-> yes, that.
18:14 <balrog-> or I get messages from friends as follows: http://pastebin.com/zFk1xrWG
18:15 <DFJustin> the musicbrainz cover art collection is also confusing people, understandable when you look at the page title that shows up on google https://archive.org/details/mbid-8a51ac29-77a4-4d25-9f75-8efcc25b0c33
18:16 <balrog-> that's less of an issue, since it says "Cover Art Collection"
18:16 <balrog-> I'm not looking to appeal to the lowest common denominator of people
18:16 <DFJustin> yeah but it's basically spamming google with thousands of album titles + "free download & streaming"
18:16 <balrog-> err, Cover Art Archive rather
18:17 <DFJustin> but yeah should be obvious once you arrive
18:17 <balrog-> the DAISY thing is not so obvious, and therein lies the issue
18:25 <Coderjoe> people are stupid
18:26 <balrog-> yes, but we don't want to cater to each and every stupid person
18:26 <SketchCow> Coderjoe: SPOILER ALERT
18:27 <Coderjoe> balrog-: at what point did I say I wanted to?
18:27 <balrog-> no, I'm just saying it's not worth it.
18:27 <balrog-> mistym: answer? :)
18:28 <balrog-> I'd like to know how to do this without making a pdf that crashes most viewers, even the most efficient
18:28 <mistym> balrog-: Sorry, lost track of this.
18:28 <balrog-> it's ok ;)
18:31 <mistym> Anyway: rather than one invocation of jbig2, either do one per page, or slice your set of images into groups of however many.
18:31 <mistym> jbig2enc will use one dictionary for all of the input files you give it.
18:31 <balrog-> will pdf.py be able to assemble multiple sets?
18:32 <balrog-> or do I have to then merge pdfs?
18:33 <mistym> You'll need to merge them. I *think* pdfbeads may have a feature to do this for you; let me check.
18:34 <balrog-> I don't like pdfbeads because it breaks things into lossy backgrounds anyway :/
18:34 <balrog-> pdftk apparently can
18:34 <balrog-> and pdfunite
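[Editor's note: putting mistym's advice together, a sketch of the per-page route using jbig2enc's jbig2 tool and its bundled pdf.py (the same pair as in the readme command quoted earlier), then stitching the results with pdfunite; filenames are placeholders:

    # one symbol dictionary per page: quick to view, somewhat larger
    for f in page_*.tif; do
        jbig2 -s -p -v "$f" && pdf.py output > "${f%.tif}.pdf"
    done
    # merge the per-page PDFs back into a single document
    pdfunite page_*.pdf book.pdf

Grouping, say, 25 pages per jbig2 invocation instead would trade viewing speed for better compression, as mistym describes above.]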
18:36 <mistym> Didn't realize pdfbeads always forced lossy backgrounds. That sucks.
18:36 <mistym> It does provide a --pages-per-dict option though, which is what I meant.
18:36 <balrog-> ahh.
18:37 <balrog-> and yeah, lossy backgrounds do suck
18:37 <mistym> It does that even if you only provide it one layer?
18:37 <balrog-> maybe that's not mandatory but I don't want to split into backgrounds *at all*
18:38 <mistym> What files are you feeding into it?
18:38 <balrog-> into what, pdfbeads?
18:38 <balrog-> it's been a few months since I've used it
18:39 <mistym> Yeah, I was just wondering about your input files. I haven't peeked at the code, but the help text implies that it doesn't always attempt to split into multiple layers. If the input file is already binarized, maybe it produces only a single layer?
18:39 <balrog-> they're single-layer tiffs
18:48 <mistym> Anyway, I guess joining the multiple PDFs later with another tool is probably just as easy.
18:48 <balrog-> yeah. ok
18:50 <mistym> I tried feeding already bitonalized data into pdfbeads and it seems to be hanging forever, so boo to it.
21:05 <godane> http://www.staples.com/VuPoint-Magic-Wand-Portable-Scanner-Black/product_900544
21:06 <godane> i plan on buying that on ebay
21:06 <godane> i can get a used one for like less than $10
21:10 <godane> i bid on this: http://www.ebay.com/itm/VuPoint-Magic-Wand-PDS-ST415-VP-Handheld-Scanner-/330855555020?pt=US_Scanners&hash=item4d08871fcc&autorefresh=true
21:11 <godane> same one that's on staples but for 90% off
21:11 <Schbirid> they might suck badly
21:12 <godane> we will see
21:12 <godane> i want to scan the pages and not have crappy flip cam snapshots
21:20 <godane> Schbirid: have you used one of those scanners before?
21:20 <Schbirid> nope
21:21 <godane> ok
21:21 <godane> was hoping for a sample scan
21:21 <Schbirid> i only have a REALLY crappy normal scanner
21:23 <DFJustin> I know they used to suck but I haven't even seen one since the mid 90s
21:23 <DFJustin> so who knows
21:23 <DFJustin> used flatbeds are mad cheap though so I'm not sure what the point is
21:26 <Schbirid> easier to scan books i think
21:26 <Schbirid> you do not have to stress the back binding
21:28 <Schbirid> nighty
21:38 <SketchCow> http://archive.org/details/magazine_rack
21:44 <DFJustin> missing an L on "e Scienze is an Italian science magazine"
21:46 <SketchCow> Fixed
23:40 <dashcloud> This is the kind of crazy I can get behind: http://www.wired.com/threatlevel/2013/01/corporation-carpool-flap/ Guy rides in the carpool lane with his papers of incorporation - the paper is the corp, and the corp is a person, so he's got two people in the car