Optimizing C++/General optimization techniques/Input/Output

Store text files in a compressed format[edit]

Disks have much less bandwidth than processors. By (de)compressing on the fly, the CPU can speed up I/O.

Text files tend to compress well. Be sure to pick a fast compression library, though; zlib/gzip is very fast, bzip2 less so. The Boost Iostreams library contains Gzip filters that can be used to read from a compressed file as if it were a normal file:

#include <fstream>
#include <boost/iostreams/filtering_stream.hpp>
#include <boost/iostreams/filter/gzip.hpp>

namespace io = boost::iostreams;
class GzipInput : public io::filtering_istream {
    std::ifstream file;
public:
    explicit GzipInput(const char *path)
      : file(path, std::ios_base::in | std::ios_base::binary) {
        push(io::gzip_decompressor());  // decompress what is read from file
        push(file);                     // the underlying data source
    }
};
Even if this is not faster than "raw" I/O (e.g. if you have a fast solid state disk), it still saves disk space.

Binary format[edit]

Instead of storing data in text mode, store it in a binary format.

On average, binary numbers occupy less space than formatted numbers, and so it is faster to transfer them from memory to disk or vice versa. Also, if the data is transferred in the same format used by the processor, there is no need for costly conversions from text format to binary format or vice versa.

Some disadvantages of using a binary format are that data is not human-readable and that the format may be dependent on the processor architecture.
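As a minimal sketch of the technique (the file name and value type are hypothetical), an array of doubles can be written and read back as raw bytes in a single bulk transfer, with no text formatting at all:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdio>
#include <fstream>
#include <vector>

// Write the raw bytes of the vector: one bulk transfer, no formatting.
// The resulting file is tied to this machine's endianness and
// floating-point representation.
void save_binary(const char *path, const std::vector<double> &v) {
    std::ofstream out(path, std::ios::binary);
    out.write(reinterpret_cast<const char *>(v.data()),
              static_cast<std::streamsize>(v.size() * sizeof(double)));
}

// Read `count` doubles back with a single transfer.
std::vector<double> load_binary(const char *path, std::size_t count) {
    std::vector<double> v(count);
    std::ifstream in(path, std::ios::binary);
    in.read(reinterpret_cast<char *>(v.data()),
            static_cast<std::streamsize>(count * sizeof(double)));
    return v;
}
```

Note that the reader must already know how many values the file contains (or derive it from the file size), since a binary file carries no self-describing delimiters the way a text file does.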

Open files[edit]

Instead of opening and closing an often needed file every time you access it, open it only the first time you access it, and close it when you are finished using it.

Closing and reopening a disk file takes time. Therefore, if you need to access a file often, you can avoid this overhead by opening the file only once, before the first access, keeping its handle (or handle wrapper) in a scope that outlives the individual accesses, and closing it when you are done.
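A minimal sketch of this idea (the class and file names are hypothetical): a log object opens its file once in the constructor and keeps it open for its whole lifetime, so each append is a buffered write rather than an open/write/close cycle.

```cpp
#include <fstream>
#include <string>

// The file is opened once, in the constructor, and stays open for the
// lifetime of the object; the destructor of std::ofstream closes it.
class Log {
    std::ofstream file;
public:
    explicit Log(const char *path)
      : file(path, std::ios::app) {}
    void append(const std::string &line) {
        file << line << '\n';
    }
};
```

Hoisting the `Log` object to a scope enclosing the loop (or making it a long-lived member of some owning object) is what keeps the file open across accesses.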

I/O buffers[edit]

Instead of doing many I/O operations on single small or tiny objects, do I/O operations on a 4 KB buffer containing many objects.

Even if the run-time support buffers I/O operations, the overhead of many I/O function calls costs more than copying the objects into a buffer yourself and performing one call on the whole buffer.

Do not make the buffer much larger, though: buffers much bigger than this do not have good locality of reference.
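A minimal sketch of manual output buffering (the class name is hypothetical): small records are copied into a 4 KB buffer, and only a full buffer triggers an actual write call.

```cpp
#include <cstddef>
#include <fstream>
#include <vector>

// Accumulate small records in a 4 KB buffer; issue one write() per
// full buffer instead of one per record.
class BufferedWriter {
    std::ofstream out;
    std::vector<char> buf;
public:
    explicit BufferedWriter(const char *path)
      : out(path, std::ios::binary) { buf.reserve(4096); }
    void put(const void *data, std::size_t n) {
        if (buf.size() + n > 4096) flush();  // make room first
        const char *p = static_cast<const char *>(data);
        buf.insert(buf.end(), p, p + n);
    }
    void flush() {
        if (!buf.empty()) {
            out.write(buf.data(), static_cast<std::streamsize>(buf.size()));
            buf.clear();
        }
    }
    ~BufferedWriter() { flush(); }  // write out whatever remains
};
```

The same pattern applies to input: read a buffer-sized chunk once, then parse many objects out of it in memory.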

Memory-mapped file[edit]

Except in a critical section of a real-time system, if you need to access most parts of a binary file in a non-sequential fashion, instead of accessing it repeatedly with seek operations, or loading it all into an application buffer, use a memory-mapped file, if your operating system provides such a feature.

When you have to access most parts of a binary file in a non-sequential fashion, there are two standard alternative techniques:

  • Open the file without reading its contents; every time some data is needed, jump to its position with a file positioning operation (a "seek"), and read it from the file.
  • Allocate a buffer as large as the whole file, open the file, read its contents into the buffer, and close the file; every time some data is needed, find it in the buffer.

Compared with the first technique, a memory-mapped file replaces every positioning operation with a simple pointer assignment, and every read operation with a simple memory-to-memory copy. Even assuming that the data is already in the disk cache, both memory-mapped operations are much faster than the corresponding file operations, since each of the latter requires a system call.

With respect to the technique of pre-loading the whole file into a buffer, using a memory-mapped file has the following advantages:

  • With file-reading system calls, the data is usually transferred first into the disk cache and then into process memory, while a memory-mapped file gives direct access to the system buffer holding the data loaded from disk, saving both a copy operation and disk cache space. The situation is analogous for output operations.
  • When reading the whole file up front, the program stalls for a significant time, while with a memory-mapped file that time is spread across the processing, as the file is accessed.
  • If some sessions need only a small part of the file, a memory-mapped file loads only those parts.
  • If several processes have to load the same file into memory, a separate buffer is allocated for every process, while with a memory-mapped file the operating system keeps a single copy of the data in memory, shared by all the processes.
  • When memory is scarce, the operating system has to write even the unchanged parts of a buffer out to the swap area, while the unchanged pages of a memory-mapped file are simply discarded.

Still, memory-mapped files are not appropriate in a critical portion of a real-time system, as access to the data has a latency that depends on whether the data has already been loaded into system memory or is still only on disk.

The C++ standard does not define a memory mapping interface and in fact the C interfaces differ per platform. The Boost Iostreams library fills the gap by providing a portable, RAII-style interface to the various OS implementations.