Streams or Iterators?
When I updated my LZW reference code to use the latest C++ features, I abstracted my input and output functions using templates. Data was read and written using the iostreams paradigm, which requires simple classes that implement just a few functions. Would I have been better off using the iterator paradigm instead? The C++ algorithms library favors that method of processing data, and it can be both elegant and powerful. Which of the two paradigms is the right one for data compression?
General purpose data compression routines tend to be used on binary streams of data, either from files or in-memory objects. So what is the best general paradigm for input and output when compressing data?
You might analyze this problem by imagining that you need to write a binary copy routine.
This routine is particularly nice when you are performing a simple copy using pointers to memory - the generated code should be really efficient.
However, the iterator paradigm doesn’t work quite as well when you want to perform a binary copy of data in a file. I can make use of iterators that almost do the job:
But the bad news is that both istream_iterator and ostream_iterator rely on the extraction and insertion operators, which are really meant for whitespace-delimited textual data, not binary data. The copy routine shown here will not make a byte-for-byte copy of the input file.
So when using files, the stream approach seems to be the way to go:
If your files have been opened using the iostream classes, you can use this binary copy function without having to write any glue code - the streams already support the get() and put() methods, so this works right out of the box.
If I've made up my mind that my data compression routine is going to use one of these two paradigms, it means I am going to have to write some glue code. If I choose the iterator-based approach, I need the equivalent of istream_iterator and ostream_iterator for binary files - and these aren't in the standard library. If I choose the stream-based approach, I need efficient put() and get() members for blocks of memory. basic_stringstream might do the job in some cases, but not in all.
After dithering around with various solutions, I tentatively opted for the stream paradigm. I found the implementation for various sources of data to be fairly simple, and the interface is easy to understand. I don’t know if it’s the perfect choice, and I’ll keep experimenting, but for now it works for me. My abstraction of the LZW code still needs a lot of work, so it’s always possible I could rethink this at a later date.
I’d like to hear your thoughts - is there an obvious right answer to this question?