I apologize, this gets confusing. Reply and I'll try to clarify the best I can. I'll even submit the example files I've been experimenting with.
In iTunes, I have been converting WAV files into Apple Lossless .m4a files (ALAC). Recently, some of the files I've converted have had inconsistent bit depth values. Since I work in Adobe Audition CS6, I can output various combinations: Format Settings of 32- or 64-bit (Integer or Floating Point IEEE), sometimes 24-bit Integer, and Sample Types at 44.100 kHz or 192.000 kHz and up to 32-bit (float).
Contradictory? YES. It sounds weird to have Format Settings of up to 64-bit Integer/Float, but Sample Types at various sample rates and almost always 32-bit (float). Could this mismatch be the reason?
Another problem: iTunes can convert files to ALAC, among other formats. As it turns out, iTunes can convert to "32-bit," but only under specific circumstances. Working in Adobe Audition CS6, I have two options for "32-bit": 32-bit Integer, and 32-bit Floating Point (IEEE). For iTunes to be able to maintain that bit depth, I need to select 32-bit INTEGER under Format Settings, and 32-bit under Sample Type.

When the file has been exported, I reopen it in Adobe Audition, where Bit Depth (in the "Files" box in the top right-hand corner) is suddenly listed as "32 (float)." All I did was export it, not convert it, yet despite specifically choosing "Integer," I have a 32-bit float audio file. iTunes opens it just fine, calls it "32-bit" as well, and converts it to other formats in iTunes as 32-bit (probably float) without trouble. However, if I export an audio file from Audition with 32-bit Floating Point (IEEE) Format Settings, iTunes fails to convert it to 32-bit and instead converts it to 16-bit.
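For what it's worth, the integer/float distinction is recorded in the WAV file itself: the fmt chunk's format tag is 1 for PCM integer and 3 for IEEE float, so the two "32-bit" exports are genuinely different file formats, regardless of what an app's file panel labels them. A minimal sketch that parses the tag (the chunk layout follows the WAV spec; the header bytes are hand-built here purely for illustration, not taken from any of the files discussed above):

```python
import struct

def wav_format(data: bytes):
    """Return (format_tag, bits_per_sample) from a WAV's fmt chunk.
    format_tag 1 = PCM integer, 3 = IEEE float."""
    assert data[:4] == b"RIFF" and data[8:12] == b"WAVE"
    pos = 12
    while pos + 8 <= len(data):
        chunk_id = data[pos:pos + 4]
        size = struct.unpack("<I", data[pos + 4:pos + 8])[0]
        if chunk_id == b"fmt ":
            tag, _ch, _rate, _brate, _align, bits = struct.unpack(
                "<HHIIHH", data[pos + 8:pos + 24])
            return tag, bits
        pos += 8 + size + (size & 1)  # chunks are word-aligned
    raise ValueError("no fmt chunk")

def tiny_wav(fmt_tag: int, bits: int) -> bytes:
    """Hand-build an empty WAV header with the given format tag."""
    ch, rate = 2, 44100
    block = ch * bits // 8
    fmt = struct.pack("<HHIIHH", fmt_tag, ch, rate, rate * block, block, bits)
    riff = b"WAVE" + b"fmt " + struct.pack("<I", len(fmt)) + fmt \
         + b"data" + struct.pack("<I", 0)
    return b"RIFF" + struct.pack("<I", len(riff)) + riff

print(wav_format(tiny_wav(1, 32)))  # (1, 32): 32-bit integer PCM
print(wav_format(tiny_wav(3, 32)))  # (3, 32): 32-bit IEEE float
```

Running `wav_format` over an actual Audition export (read the file with `open(path, "rb").read()`) shows which of the two "32-bit" variants it really contains.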
Here's what's really odd: lately, every single ALAC file opens in Adobe Audition with a Bit Depth of "32 (float)."
So... where are the flaws? What are the real bit depths? ALAC files with the same sample rate but different bit depths (according to iTunes) still show up as "32 (float)" in Adobe Audition. Is iTunes converting them all to 32-bit without knowing it?
32-bit Floating Point is Audition's default operating mode. It offers considerable advantages over any fixed-bit-depth integer .wav format.
See your other threads elsewhere on this forum.
Not what I asked, but thanks. Actually, I already knew that, which is why I always work in and save source files in Floating Point (IEEE) formats.
While I'd guess my files are actually 16-, 24-, or 32-bit (whichever iTunes claims), given their varying file sizes, Audition continues to label all of my ALAC files "32 (float)." If you have any explanation, I'd appreciate it.
Try looking at the advanced properties of the file - it might well tell you exactly what it is. I'm not surprised there's confusion with ALAC, though: the .m4a is only a container for something essentially different, and the properties shown may reflect the container rather than the actual contents. Also, if you tell Audition to open all files as 32-bit, then that's what it will say they've been decoded as. Yes, that's a preferences option...
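If the advanced properties don't settle it, the stored bit depth of an ALAC stream can be read out of the "magic cookie" inside the .m4a container: per Apple's published ALACSpecificConfig, the cookie payload starts with a 32-bit frame length, a compatible-version byte, and then a bit-depth byte. The sketch below is a rough heuristic scan, not a proper MP4 parser (a real check should use an MP4 box walker or a tool like ffprobe), and the synthetic bytes at the bottom stand in for a real file:

```python
import struct

def alac_bit_depth(data: bytes):
    """Scan raw .m4a bytes for the 36-byte 'alac' magic-cookie box
    and return its bit-depth field, or None if not found.

    Heuristic: the cookie box is normally 36 bytes (8-byte box header
    + 4 bytes version/flags + 24-byte ALACSpecificConfig); the codec's
    sample entry is also named 'alac', so the size check skips it.
    """
    pos = 0
    while True:
        pos = data.find(b"alac", pos)
        if pos < 0:
            return None
        size = struct.unpack(">I", data[pos - 4:pos])[0] if pos >= 4 else 0
        if size == 36:
            # after the tag: version/flags(4) + frameLength(4)
            # + compatibleVersion(1), then the bitDepth byte
            return data[pos + 4 + 4 + 4 + 1]
        pos += 4

# Synthetic stand-in for the cookie box of a 24-bit ALAC stream:
cookie = struct.pack(">I", 36) + b"alac" + b"\x00" * 4 \
       + struct.pack(">I", 4096) + bytes([0, 24]) + b"\x00" * 18
print(alac_bit_depth(b"junk" + cookie + b"junk"))  # -> 24
```

On a real file you would pass `open("song.m4a", "rb").read()` (a hypothetical path); if this reports 16 or 24 while Audition says "32 (float)," that points to Audition's decode-to-float behavior rather than iTunes upsampling anything.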