Most of us do our editing in 16-bit color, even though no device can output an image with such precision. Why would one bother with that but not with similarly... unusable... color spaces?
The answer, as I see it, is pretty straightforward.
We use high bit depth to make the mathematical, color-processing calculations that take place during editing more precise. When we are ready to go to print, the reduction to "usable" precision is simple rounding. Yes, errors are introduced in the conversion from 16-bit to 8-bit, but those errors tend to be reasonably small and (more importantly) very consistent.
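That "simple rounding" can be sketched in a few lines of illustrative Python; the point is that the round-trip error is never more than about half an 8-bit step, regardless of the image content:

```python
# Sketch: converting a 16-bit level to 8-bit is plain rounding, and the
# error it introduces is small and consistent. Values are illustrative.

def to_8bit(v16):
    """Round a 16-bit level (0..65535) to the nearest 8-bit level (0..255)."""
    return round(v16 * 255 / 65535)

v16 = 32768                         # a mid-gray 16-bit value
v8 = to_8bit(v16)                   # -> 128

# Reconstruct and measure the round-trip error, in 16-bit units
back = round(v8 * 65535 / 255)
err = abs(back - v16)
print(v8, err)                      # error stays within ~half an 8-bit step
```

The worst-case error is half of one 8-bit step, i.e. about 0.2% of full scale, and it behaves the same way for every pixel; that is what makes bit-depth reduction so much tamer than gamut remapping.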
On the other hand, we use large-gamut spaces to make color-processing more accurate. Unfortunately, to get usable output we must remap colors from the larger gamut into smaller ones that can be reproduced by our output devices. It is this remapping of out-of-gamut colors that can lead to color-shifting "mismatches" which may be relatively pronounced, and which can vary considerably from image to image - depending on the colors present and where they fall within the larger color space.
In my experience, the remapping of out-of-gamut colors introduces far more visible deviation from the original than the reduction of color depth. The more of the (larger) source color space that falls outside the gamut of the (smaller) destination space, the worse the problem gets. This is true even in a high-end, professional workflow (such as my own). Less capable hardware/software can make the problem worse, but it's always there to greater or lesser extents.
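As a concrete illustration of that remapping problem, here is a rough Python sketch using the commonly published linear-light conversion matrices for AdobeRGB and sRGB (both D65). The naive channel clipping at the end is a simplification; real conversions apply rendering intents and tone curves, but the underlying issue is the same:

```python
# Sketch: fully saturated AdobeRGB green falls outside the sRGB gamut.
# Converting it via XYZ drives two sRGB channels negative, which must be
# clipped -- and clipping shifts the color, unlike simple bit-depth rounding.

ADOBE_TO_XYZ = [                     # AdobeRGB (1998) -> XYZ, D65
    [0.5767, 0.1856, 0.1882],
    [0.2973, 0.6274, 0.0753],
    [0.0270, 0.0707, 0.9911],
]
XYZ_TO_SRGB = [                      # XYZ -> linear sRGB, D65
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
]

def mul(m, v):
    return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]

green_adobe = [0.0, 1.0, 0.0]        # fully saturated AdobeRGB green
srgb = mul(XYZ_TO_SRGB, mul(ADOBE_TO_XYZ, green_adobe))
print([round(c, 3) for c in srgb])   # R and B come out negative
clipped = [min(max(c, 0.0), 1.0) for c in srgb]
print(clipped)                       # the clipped color is no longer the same hue
```

How large the shift is depends entirely on which colors in the image fall outside the destination gamut, which is exactly why the mismatch varies so much from image to image.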
Additionally, with AdobeRGB and sRGB, all the primaries used are within the visible color gamut of the human eye. This is not the case with ProPhoto. This difference has two major consequences. It means that: 1) the entire ProPhoto gamut can never be fully covered by any output device, and 2) the conversion from AdobeRGB to sRGB will result in more "direct matches" than will the conversion from ProPhoto to AdobeRGB.
In the end, color-management is as much "art" as it is "science". For some, it is also an integral part of business. For those of us concerned with both color-fidelity and processing efficiency, it makes sense to keep as much of our workflow within the largest gamut capable of being used by our output devices and to avoid (as much as possible) the remapping of out-of-gamut colors (and the manual "tweaking" required to correct any color-shifting this remapping may cause).
For me, using 16-bit can be very helpful. For example, the last photograph I took and posted here was slightly overexposed. The camera I shot it with uses 14-bit, and the raw was converted to a 16-bit lossless TIFF. Anyway, the point is that with the huge tonal range of 16-bit, I managed to pull back some detail in Photoshop Camera Raw that I wouldn't have been able to recover from an 8-bit image, because I had more levels of color to work with, and 16-bit keeps more detail between adjacent tones, even in the heavily exposed areas.
That’s actually two completely different things. Bit depth in camera RAW files is not the same as bit depth in an image file, simply because RAW is an input file format, whereas TIFF and JPEG are output file formats. RAW files use 12-bit, 14-bit, or 16-bit integers to record, for each individual photosite in your camera’s sensor, a value proportional to the number of photons striking it for the duration that the shutter is open.
Since no output device can render the range of tonal values captured by your camera, it gets converted at the RAW processing stage by assigning a black point and a white point, both of which tend to be well inside the actual recorded tonal range. TIFF files use 8-bit or 16-bit binary integers to encode numbers in an absolute range of 0 to 255 for 8 bits and 0 to 65,535 for 16 bits. Both of those ranges have the same absolute white and black point; the 16-bit integers just have more levels between the extremes. That’s why the RAW processing software is able to present more detail at both extremes (albeit more effectively at the highlight end of things): because that detail is there to begin with, just beyond an arbitrary cut-off point. RAW processing, in effect, re-maps the captured range of tones into something that both your monitor and printer can display.
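That black/white-point remap can be sketched like this (the black and white points here are hypothetical values, purely for illustration):

```python
# Sketch of the RAW-processing remap described above. Tones between the
# chosen black and white points are stretched linearly into the output
# range; tones beyond the cut-offs are discarded. Numbers are hypothetical.

RAW_MAX = 2**14 - 1        # 14-bit sensor data: levels 0..16383
black_point = 512          # chosen well inside the recorded range
white_point = 15000

def remap(raw, out_max=65535):
    """Map raw levels between the black and white points to 0..out_max."""
    x = (raw - black_point) / (white_point - black_point)
    x = min(max(x, 0.0), 1.0)        # clip tones beyond the cut-offs
    return round(x * out_max)

print(remap(400))     # below the black point -> crushed to 0
print(remap(15500))   # above the white point -> blown to 65535
print(remap(7756))    # an in-range tone lands between the extremes
```

Moving the white point upward during RAW processing is exactly what "pulling back" highlight detail amounts to: the data above the old cut-off was recorded all along.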
Bit depth: 16-bit > RAW > 8-bit.
Color gamut: RAW > ProPhoto > Adobe > sRGB.
Many of us use 16-bit images for editing. Our cameras are only 12- or 14-bit. Monitors are only 8-bit (a few high-end ones are now 10-bit), and many people see no difference between printing 8-bit and 16-bit images. JPEGs for the web are all 8-bit. So why do we use 16-bit? Less chance of clipping or introducing artifacts in editing.
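A quick, illustrative sketch of that clipping/artifact argument: apply the same strong brightening curve to a shadow slice at 8-bit and at 16-bit precision, then count the distinct levels that survive. The curve and the numbers are arbitrary, chosen only to show the effect:

```python
# Sketch: aggressive editing at 8-bit leaves gaps between output levels
# (posterization/banding); the same edit at 16-bit keeps smooth gradation.

def brighten(levels, max_level, gamma=0.4):
    """Apply a strong brightening curve, quantized back to the same depth."""
    return {round(((v / max_level) ** gamma) * max_level) for v in levels}

shadows_8 = brighten(range(0, 26), 255)           # darkest ~10% at 8-bit
shadows_16 = brighten(range(0, 26 * 257), 65535)  # same tonal slice at 16-bit

print(len(shadows_8), len(shadows_16))
```

At 8-bit, the darkest 26 levels get stretched across roughly 100 output levels, so most of the intermediate steps are simply empty: that is visible banding. At 16-bit there are thousands of levels in the same tonal slice, so the stretched gradient stays smooth until the final reduction to 8-bit.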
Some people like ProPhotoRGB as an editing space for similar reasons: less chance of clipping color channels and fewer hue shifts when using curves. So the reason for using it is not based on whether you can see all of it (just like bit depth). Anyone who uses Lightroom has used a version of ProPhotoRGB whether they realize it or not.

However, color spaces are quite a bit trickier to deal with than bit depth. I do a lot of large-gamut printing for myself and others, and learning to move between color spaces is important for that, but it is something you need to learn if you are using a ProPhoto-based workflow. For many, it is not worth the effort for the benefits it brings, and I definitely see that. I think using ProPhotoRGB is not nearly as important as editing in 16-bit, but I have been more satisfied with my printing since moving to a ProPhoto-based workflow. The biggest issue is that some software is pretty God-awful with ProPhotoRGB; I think many negative experiences with this color space are due to software issues.
I work and store in 16-bit, edit in AdobeRGB, and dump to 8-bit JPEG in sRGB. I'm not a colour expert in any shape or form (heck, I only recently started making sure my monitor is calibrated correctly!), but I found odd annoyances when I tried using ProPhoto. The usual story: I followed some advice from a supposed expert, but they only gave half the story, and so I got suckered into something I didn't really need because I didn't understand why I was doing it!
I was reading a book the other day that advised sticking with AdobeRGB and dumping to sRGB for JPEG; anything else is usually not worth the bother unless you are a professional printer and fully understand the whats and whys of the different colour spaces. Besides, sticking with sRGB just makes life easier when you need to exchange images with other people and upload to websites.
With regard to quality, as I have said before, I have some very old images from 1998/99 from my first digital camera, and they are now nothing more than novelties because the quality is so poor. I can't print them as they are so rubbish. I don't want that to happen again; hopefully keeping the quality up to the highest I can should suffice for all future prints. That's one of the great things about film: you can pick up a negative from 70 years ago and still get a good print, but you can't take a digital image from a basic first-generation family digital camera from 14 years ago and even begin to think about printing it!
"One of the great things about film, you can pick up a negative from 70 years ago and still get a good print"
As long as it's B&W.
In the past, color films (with the notable exception of Kodachrome) were simply awful when it came to permanence. It is possible that newer color emulsions have overcome this issue, or that future ones will, but it will be 70 years before we find out.