One of my colleagues who does data analysis had a CSV (comma-separated values) file that he could not open in Excel or in any other spreadsheet program he tried.
Whether because of its sheer file size of 125MB or its row count of more than 1,500,000, the programs would gag.
It turned out that for his purposes he did not need all of the data, only a subset of the columns. Maybe extracting just what he needed into a smaller CSV would let him work with it.
I had earlier read parts of the book The Linux Command Line, by William Shotts. I vaguely remembered mention of a utility to selectively pull fields out of a text file. That turned out to be the cut command.
Here's what the first few rows of the original data file looked like.
Only the type and userid columns were actually required. Using the following command I was able to generate another CSV with just those two fields.
$ cut -f 2,4 -d, july.csv > type-userid.csv
The -f option specifies which fields to extract: in this case, the 2nd and the 4th. The -d option specifies the field delimiter, which in our case is the comma; it defaults to a tab.
july.csv is the input file, and type-userid.csv captures the standard output.
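Since cut also reads standard input, a quick way to sanity-check the field numbers is to pipe a couple of made-up rows through the same options. The column names below are only a guess at the file's layout, not the actual data:

```shell
# Made-up sample rows piped through the same cut options as above;
# the header names are assumptions, not the file's real layout.
printf 'date,type,amount,userid\n2014-07-01,login,0,u123\n' | cut -f 2,4 -d,
# prints:
# type,userid
# login,u123
```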
The first few rows of the resulting file were
And the filesize was reduced to about 24MB, which was usable and much more manageable.
(However, depending on which spreadsheet app you are using, the row count might exceed the limit. For example, both Excel 2007 and LibreOffice 4.2 Calc can handle a maximum of 1,048,576 rows.)
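One way to check whether a file will fit before opening it is to count its lines with wc. This is a sketch against a tiny stand-in file; substitute the real CSV's name:

```shell
# Stand-in file; replace with the real CSV. One header line plus data rows.
printf 'type,userid\nlogin,u1\nlogout,u2\n' > sample.csv
rows=$(($(wc -l < sample.csv) - 1))   # subtract the header row
echo "$rows data rows"                # must stay under 1,048,576 to open in Excel/Calc
```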
The cut command works blazingly fast; it took only about a second to process the input file. After I handed off the output file, I realized I might have been able to save my colleague some tedium and waiting time by applying other CLI text-processing commands as well, such as sort, uniq, and wc, depending on what he wanted to do.
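For instance, counting how many rows of each type appear in the extracted file takes only a short pipeline. This is just a sketch; the rows below are invented stand-ins for the real data:

```shell
# Invented sample standing in for the type-userid.csv produced above.
printf 'type,userid\nlogin,u1\nlogin,u2\nlogout,u1\n' > type-userid.csv
# Skip the header line, keep the first field, then count occurrences per type.
tail -n +2 type-userid.csv | cut -f 1 -d, | sort | uniq -c
```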
The Linux Command Line, by William Shotts
(You can buy the paper book or download the PDF for free)