Call to all mathematicians:
Open access to the mathematical literature is an important goal.
Each of us can contribute to that goal by making available
electronically as much of our own work as feasible.
Our recent work is likely already in computer readable form and
should be made available variously in TeX source, dvi, pdf (Adobe
Acrobat), or PostScript form. Publications from the pre-TeX era can be
scanned and/or digitally photographed. Retyping in TeX is not as
unthinkable as first appears.
Our action will have greatly enlarged the reservoir of freely available
primary mathematical material, particularly helping scientists
working without adequate library access.
Recommendation of the Committee on Electronic Information and
Communication of the International Mathematical Union (IMU), endorsed by
the IMU Executive Committee on May 15, 2001 in its 68th session in
Princeton, NJ.
See also
http://www.mathunion.org/IMU_Committees/call_authors.html
Further comments
A fortunate consequence of the above recommendation will be that each
of us will have created his or her own collected works, with the
opportunity to comment on the significance and relevance of our oeuvres.
Advice on scanning papers (taking into account the particular demands
of different operating systems and varying configurations) is available
below and at other locations (to be inserted).
Advice on adding so-called
metadata tags --- to make it easy to find your papers --- is
also available.
How to scan printed papers and create Metadata
This is an explanation of how I put some of my older papers on the net. I
intend to do this for all of them, since it eventually turned out to be
quite simple. I will continue to report on my experience on this page.
My equipment is a PC running Linux 2.2 (Red Hat 7.1) with a 1200dpi
flatbed scanner. I use only freely available software.
How to scan and create a pdf-file and a ps-file (May 7, 2001):
- I used xsane in a separate directory.
It recognizes the scanner and is menu driven. I used:
301dpi, Lineart (or Black and White), preview, and the file names
filename_0.pnm to filename_9.pnm to store the pages of a 10 page paper.
Each page file is 661KB.
- convert filename_?.pnm filename.pdf then created the PDF file,
which was 382KB.
Using acroread filename.pdf I checked the outcome and also printed
to the file filename.ps (410KB).
- Put filename.pdf and filename.ps on the net (the whole sequence is
collected in a script after this list).
- See schouten.pdf and
schouten.ps for the result.
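The conversion steps above can be collected into a small script. This is
only a sketch: xsane itself must still be driven interactively, and the
script assumes Ghostscript's pdf2ps as a non-interactive alternative to
printing the PostScript file from acroread.

#!/bin/sh
# Assemble the scanned pages filename_0.pnm ... filename_9.pnm
# into a single PDF file.
convert filename_?.pnm filename.pdf

# Create the PostScript version. The text above prints it from
# acroread; pdf2ps (part of Ghostscript) does the same job
# non-interactively.
pdf2ps filename.pdf filename.ps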
How to create a Metadata shadow file:
- To create Metadata, in your browser open the URL
http://MathNet.preprints.org/ or
http://euler.zblmath.fiz-karlsruhe.de/MPRESS/. Here you can search for
preprints which are available on the net.
- In the upper half of this page, scroll down until you see
`A tool for creating MetaData is MMM.' Click on `MMM' or go directly to
http://www.mathematik.uni-osnabrueck.de/cgi-bin/MMM3.0.cgi.
Click on `Go'.
- You see a form. Fill it out, always using full URLs. Click on
`Create'. Check the result. Click on `Prepare for download'.
Save the source on your computer, (later) in a subdirectory of your
homepage directory called `preprint-shadows' (a sketch of what such a
file looks like appears after this list).
- If your preprint shadow files are not yet harvested automatically, send
email containing the URL of your directory `preprint-shadows'
to one of the addresses given in the upper half of
http://MathNet.preprints.org/.
The metadata will then be harvested automatically and can be found via
MPRESS.
- See a sample result
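For orientation, here is a sketch of what such a shadow file might look
like: a small HTML page carrying Dublin Core meta tags, saved into the
preprint-shadows directory. All names, dates, and URLs below are made-up
placeholders, and the exact tag set that MMM emits may differ.

# Write an illustrative preprint shadow file (placeholder data only).
cat > preprint-shadows/example.html <<'EOF'
<html>
<head>
<title>An example paper</title>
<meta name="DC.Title"      content="An example paper">
<meta name="DC.Creator"    content="Doe, Jane">
<meta name="DC.Date"       content="2001-05-15">
<meta name="DC.Identifier" content="http://www.example.org/papers/example.pdf">
</head>
<body>
Shadow file for the preprint `An example paper'.
</body>
</html>
EOF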
Added May 23, 2001:
- Attention:
The use of convert above does not work if the /tmp/ directory is too
small. It requires a lot of memory and induces a heavy load on the computer
while running. Go to lunch during this time. The problem is that
convert is optimized for converting single graphics files between
different formats, not for pulling many pictures into one PDF file.
With a large /tmp/ partition (7GB) it worked under Linux 2.2.
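One possible workaround, assuming your ImageMagick honors the
MAGICK_TMPDIR environment variable (current versions do), is to point
convert's temporary storage at a partition with enough free space:

# Let convert write its temporary files to a large partition
# instead of a small /tmp (the path is only an example).
MAGICK_TMPDIR=/scratch/tmp
export MAGICK_TMPDIR
convert filename_?.pnm filename.pdf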
- Linux 2.4 kills convert quite soon if you try the following procedure.
- If you scan with 600dpi and get large files, the following procedure
may help:
Scan with resolution 603dpi into files rupp-weg_??.tiff; this produced
page files of 170KB each.
convert rupp-weg_??.tiff rupp-weg.pdf took more than 1 hour and used
all resources of the computer. The resulting file rupp-weg.pdf was
8.2MB -- too large.
for a in *.tiff; do convert $a ${a%.*}.pnm; done
produced files of 2.4MB each.
convert rupp-weg_??.pnm rupp-weg.pdf
produced a file of 8.2MB -- not advisable.
for a in rupp-weg_??.pnm; do pnmscale 0.5 $a > 0.5-$a; done
produced files of 4.8MB each!
convert -monochrome -verbose 0.5-rupp-weg_??.pnm rupp-weg.pdf
produced a file of 660KB, see
rupp-weg.pdf. This is usable.
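Put together, the variant that worked reads as follows (a minimal sketch
using exactly the commands above):

#!/bin/sh
# 603dpi scans: tiff -> pnm -> scale down -> monochrome PDF.
for a in rupp-weg_??.tiff; do convert $a ${a%.*}.pnm; done

# Halve the resolution; pnmscale writes to stdout, so redirect per page.
for a in rupp-weg_??.pnm; do pnmscale 0.5 $a > 0.5-$a; done

# Force 1-bit output while assembling all pages into one PDF.
convert -monochrome -verbose 0.5-rupp-weg_??.pnm rupp-weg.pdf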
Added May 31:
- 24 pages scanned with 300dpi
648797 May 30 16:42 nat-transf_01.pnm ...
648797 May 30 16:55 nat-transf_24.pnm
- First way of conversion, with resulting file sizes; it works only under
Linux 2.2 and is killed by the kernel under Linux 2.4:
convert -monochrome -verbose nat-transf_??.pnm nat-transf3.pdf
I let it run overnight. It did not exit properly. Under the new version of
Red Hat Linux it is killed immediately by the kernel.
1200421 May 30 17:55 nat-transf3.pdf
printing with acroread gave
1211860 May 31 08:12 nat-transf3.ps
- Second way of conversion, with resulting file sizes:
for x in nat-transf_??.pnm; do pnmtops $x > $x.ps; done
1319530 May 30 17:12 nat-transf_01.pnm.ps
cat nat-transf_??.pnm.ps > nat-transf.ps
extractres nat-transf.ps | includeres > nat-transf2.ps
31668720 May 30 17:14 nat-transf2.ps
ps2pdf nat-transf2.ps
973747 May 30 17:15 nat-transf2.pdf
printing with acroread gave
1243490 May 31 08:11 nat-transf2.ps
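The second way, collected into one script (a sketch; extractres and
includeres come from the psutils package, ps2pdf from Ghostscript):

#!/bin/sh
# Convert each page to PostScript; pnmtops writes to stdout.
for x in nat-transf_??.pnm; do pnmtops $x > $x.ps; done

# Concatenate the pages, then extract the embedded resources into
# separate files and re-include them, yielding a self-contained
# document (both tools are from psutils).
cat nat-transf_??.pnm.ps > nat-transf.ps
extractres nat-transf.ps | includeres > nat-transf2.ps

# Distill the result to PDF with Ghostscript.
ps2pdf nat-transf2.ps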
Added September 21:
From: Mark Histed
To: Peter.Michor@esi.ac.at
Subject: scanning printed papers
Date: Thu, 20 Sep 2001 16:03:32 -0400
Hi, I've recently done some scanning of journal articles, and found your
page of instructions. This information might be a helpful addition:
I find that the best results are produced by
* scanning at 150 dpi, 256 level (8-bit) grayscale
* using the following convert command line:
convert -adjoin -geometry 1600x1200 -colors 8 -colorspace yuv \
  ?.png output1.pdf &
The reason for scanning in more than 2 colors is to provide some semblance
of anti-aliased text. If you're using convert to dither down the colors, I
find that using -colorspace yuv produces the best results, much better than
the default.
And yes, convert takes a LOONG time to run, but uses only about 12MB of
memory while running, so it doesn't get killed by Linux 2.4's out-of-memory
handler.
Also, if you're scanning pages with complex figures, you might not want to
dither those pages.
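One way to act on that last remark (a sketch, not from the mail above: the
file names are placeholders, and the merge step assumes Ghostscript's
pdfwrite device, which accepts PDF input) is to convert text pages and
figure pages separately and merge the results:

# Dither the text pages down to few colors, as described above.
convert -adjoin -geometry 1600x1200 -colors 8 -colorspace yuv \
  text_??.png text.pdf

# Keep the figure pages in full grayscale (no -colors, no dithering).
convert -adjoin -geometry 1600x1200 fig_??.png figures.pdf

# Merge with Ghostscript; pages come out in input order, so this
# only works if the figure pages form one block at the end.
gs -dNOPAUSE -dBATCH -sDEVICE=pdfwrite -sOutputFile=merged.pdf \
  text.pdf figures.pdf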
Added October 4, 2001:
From: Mark Histed
Subject: Re: scanning printed papers
Hi Peter,
Ok, I've improved my method a bit.
I scan at 200 dpi, 8bit grayscale, and then do the following:
convert -geometry 1600x1200 -colors 32 -colorspace yuv -adjoin \
  *.tif output.pdf
The main parameters are the number of colors and the size. 32 colors (5 bits)
and 1600x1200 make things look nice here, but the files are relatively large.
The output I got as a result of this is at
http://mozg.mit.edu/histed/9.011/readings/constantine-paton/simon_1992.pdf
Other help-pages: