Avisoft Bioacoustics conducted a microphone comparison test focusing on self-noise levels in quiet recordings. Using a soundproofed chamber, they measured the residual noise floor of various popular field recording microphones and recorders, including models from Sennheiser, Sound Devices, Zoom, and others. The results, presented as audio samples and spectrograms, reveal significant differences in noise performance between devices, highlighting the importance of microphone selection for capturing quiet sounds in nature recording and acoustic monitoring applications. The test demonstrates that some seemingly similar microphones exhibit drastically different noise characteristics, emphasizing the value of empirical testing.
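As a rough illustration of the kind of analysis behind such a test (a minimal sketch, not Avisoft's actual procedure), the snippet below estimates the RMS noise floor of a "silent" recording in dBFS and computes a spectrogram with scipy; the filename and the mono 16-bit PCM assumption are placeholders.

```python
# A minimal sketch (not Avisoft's actual analysis): estimate the RMS noise
# floor of a "silent" recording in dBFS and inspect it with a spectrogram.
# Assumes a mono, 16-bit PCM file; "quiet_room.wav" is a placeholder name.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

rate, samples = wavfile.read("quiet_room.wav")
samples = samples.astype(np.float64) / np.iinfo(np.int16).max  # scale to +/-1.0 full scale

rms = np.sqrt(np.mean(samples ** 2))
print(f"Residual noise floor: {20 * np.log10(rms):.1f} dBFS RMS")

# The spectrogram shows how the residual noise is distributed over frequency
freqs, times, power = spectrogram(samples, fs=rate, nperseg=4096)
```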
Research suggests that poor audio quality during video calls can negatively impact how others perceive us. A study found that "tinny" or distorted audio leads to participants being judged as less competent, less influential, and less likeable, regardless of the actual quality of their contributions. This "zoom bias" stems from our brains associating poor sound with lower status, mirroring how we perceive voices in the natural world. This effect can have significant consequences in professional settings, potentially hindering career advancement and impacting team dynamics.
HN users discuss various aspects of audio quality affecting perceived competence in video calls. Several point out that poor audio makes it harder to understand speech, thus impacting the listener's perception of the speaker's intelligence. Some commenters highlight the class disparity exacerbated by differing audio quality, with those lacking high-end equipment at a disadvantage. Others suggest the issue isn't solely audio, but also includes video quality and internet stability. A few propose solutions, like better noise-cancellation algorithms and emphasizing good meeting etiquette. Finally, some note that pre-recorded, edited content further skews perceptions of "professionalism" compared to the realities of live communication.
The article explores YouTube's audio quality by providing several blind listening tests comparing different formats, including Opus 128 kbps (YouTube Music), AAC 128 kbps (regular YouTube), and original, lossless WAV files. The author concludes that while discerning the difference between lossy and lossless audio on YouTube can be challenging, it is possible, especially with higher-quality headphones and focused listening. Opus generally performs better than AAC, exhibiting fewer compression artifacts. Ultimately, while YouTube's audio quality isn't perfect for audiophiles, it's generally good enough for casual listening, and the average listener likely won't notice significant differences.
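This is not the author's actual test rig, but a minimal sketch of how one might prepare a similar blind comparison locally: encode a lossless master to Opus and AAC at 128 kbps with ffmpeg, then decode both back to WAV so an ABX tool can compare all three on equal footing. Filenames are placeholders.

```python
# Rough sketch (not the article's setup): encode a lossless master to Opus and
# AAC at 128 kbps with ffmpeg, then decode back to PCM WAV for blind listening.
# "master.wav" and the output names are placeholders.
import subprocess

SRC = "master.wav"

def run(*args):
    subprocess.run(["ffmpeg", "-y", *args], check=True)

run("-i", SRC, "-c:a", "libopus", "-b:a", "128k", "opus_128.opus")
run("-i", SRC, "-c:a", "aac", "-b:a", "128k", "aac_128.m4a")

# Decode the lossy files back to WAV so the ABX tool sees a uniform format
run("-i", "opus_128.opus", "opus_128.wav")
run("-i", "aac_128.m4a", "aac_128.wav")
```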
HN users largely discuss their own experiences with YouTube's audio quality, generally agreeing it's noticeably compressed but acceptable for casual listening. Some point out the loudness war is a major factor, with dynamic range compression being a bigger culprit than the codec itself. A few users mention preferring specific codecs like Opus, and some suggest using third-party tools to download higher-quality audio. Several commenters highlight the variability of audio quality depending on the uploader, noting that some creators prioritize audio and others don't. Finally, the limitations of perceptual codecs and the tradeoff between quality and bandwidth are discussed.
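One crude, widely used indicator of how heavily a master has been compressed is its crest factor (peak-to-RMS ratio): heavily limited loudness-war masters show markedly lower values than dynamic material. The sketch below computes it, assuming a mono 16-bit PCM file with a placeholder name.

```python
# Crude loudness-war indicator (illustrative only): crest factor, i.e. the
# peak-to-RMS ratio in dB. Assumes mono 16-bit PCM; "track.wav" is a placeholder.
import numpy as np
from scipy.io import wavfile

rate, x = wavfile.read("track.wav")
x = x.astype(np.float64) / np.iinfo(np.int16).max

peak = np.max(np.abs(x))
rms = np.sqrt(np.mean(x ** 2))
print(f"Crest factor: {20 * np.log10(peak / rms):.1f} dB")
```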
Summary of Comments (2)
https://news.ycombinator.com/item?id=43602652
HN users discussed the methodology of the Avisoft microphone comparison, pointing out that the self-noise measurements weren't standardized (different gain settings and potentially different preamps were used), which makes comparisons difficult. Several commenters wished that more expensive microphones had been included in the test, like the Sennheiser MKH series and Sound Devices recorders. Some questioned the value of the SNR measurements given the uncontrolled variables. Finally, a few users offered alternative methods for comparing microphone noise, such as using a quiet, controlled environment and normalizing the recordings. Overall, the consensus was that while the data is interesting, it isn't scientifically rigorous enough to support definitive conclusions about microphone performance.
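The "normalize the recordings" suggestion amounts to referencing every file to a common level before measuring its residual noise. A minimal sketch of that idea follows; the filenames, segment positions, reference level, and mono 16-bit assumption are all hypothetical.

```python
# Illustrative sketch of the "normalize first" approach: scale each recording so
# a common calibration tone lands at the same RMS level, then measure the noise
# of a silent passage. Filenames and segment boundaries are hypothetical.
import numpy as np
from scipy.io import wavfile

def rms_db(x):
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

rate, x = wavfile.read("mic_a_take.wav")          # placeholder recording
x = x.astype(np.float64) / np.iinfo(np.int16).max

tone = x[0 * rate : 5 * rate]                     # assume a calibration tone in the first 5 s
silence = x[10 * rate : 20 * rate]                # assume a silent stretch afterwards

gain_db = -20.0 - rms_db(tone)                    # bring the tone to a common -20 dBFS reference
print(f"Normalized noise floor: {rms_db(silence) + gain_db:.1f} dBFS")
```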
The Hacker News post titled "Microphone Input Noise Comparison – Avisoft Bioacoustics" has generated several comments discussing the linked article's methodology and findings regarding microphone noise levels.
Several commenters focus on the importance of considering the entire recording chain when evaluating noise. One user points out that the noise floor measurements might be dominated by the preamplifier's noise rather than the microphone itself, especially with low-output microphones. They suggest that using higher-gain, lower-noise preamps could significantly alter the results. Expanding on this, another commenter emphasizes that the cable used between the microphone and preamp can also be a significant source of noise, particularly in longer cable runs. They advocate for testing with short, high-quality cables to minimize this factor.
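A textbook relation (not data from the article) makes the preamp-dominance point concrete: uncorrelated noise sources add as powers, so whichever stage is noisier largely sets the measured floor.

```python
# Uncorrelated noise sources add as powers, so the noisier stage largely sets
# the measured floor. The example levels are illustrative, not from the article.
import math

def combine_noise_db(*levels_db):
    """Combine uncorrelated noise levels given in dB (power summation)."""
    return 10 * math.log10(sum(10 ** (level / 10) for level in levels_db))

mic_self_noise = 14.0   # hypothetical mic equivalent noise level, dB(A) SPL
preamp_noise = 20.0     # hypothetical preamp contribution referred to the same scale
print(f"{combine_noise_db(mic_self_noise, preamp_noise):.1f} dB(A)")  # ~21 dB: the preamp dominates
```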
Another key point concerns the intended application of these microphones. While the tests focus on low-level sounds, one commenter notes that for louder sounds, a microphone's self-noise becomes less critical than its maximum sound pressure level (SPL) handling. This highlights the importance of choosing a microphone based on the intended recording environment and expected sound intensity. Another user echoes this sentiment, suggesting that for many nature recording applications, low-frequency rumble from wind or handling noise is a bigger concern than the minute self-noise levels measured in the tests.
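A back-of-the-envelope illustration with hypothetical specs shows why self-noise fades in importance for loud subjects: the signal ends up so far above the noise floor that maximum SPL handling becomes the binding constraint.

```python
# Back-of-the-envelope illustration with hypothetical specs: for loud subjects
# the signal sits far above the self-noise, so max SPL handling matters more.
self_noise_dba = 14.0    # assumed equivalent noise level, dB(A) SPL
max_spl_db = 134.0       # assumed maximum SPL before clipping

print(f"Usable dynamic range: {max_spl_db - self_noise_dba:.0f} dB")
for subject_spl in (30.0, 60.0, 90.0):   # faint ambience, birdsong, loud call
    margin = subject_spl - self_noise_dba
    print(f"{subject_spl:5.0f} dB SPL subject -> {margin:.0f} dB above the noise floor")
```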
The methodology of the testing also draws some scrutiny. One commenter questions the use of A-weighting in the measurements, arguing that it is designed for human hearing and may not be appropriate for evaluating the full spectrum of noise relevant to bioacoustic recordings, where ultrasonic frequencies can be important. They suggest that a flat frequency response or a different weighting curve might be more suitable.
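The A-weighting curve is defined analytically (IEC 61672), so it is easy to check how strongly it discounts the ultrasonic band that matters in bioacoustics; a small sketch:

```python
# The A-weighting curve (IEC 61672) discounts ultrasonic content heavily,
# which is the commenter's objection for bioacoustic use.
import math

def a_weighting_db(f):
    """A-weighting gain in dB at frequency f (Hz)."""
    ra = (12194.0 ** 2 * f ** 4) / (
        (f ** 2 + 20.6 ** 2)
        * math.sqrt((f ** 2 + 107.7 ** 2) * (f ** 2 + 737.9 ** 2))
        * (f ** 2 + 12194.0 ** 2)
    )
    return 20 * math.log10(ra) + 2.00

for f in (100, 1_000, 10_000, 40_000):
    print(f"{f:>6} Hz: {a_weighting_db(f):+6.1f} dB")
# 1 kHz sits at ~0 dB by definition, while 40 kHz is attenuated by roughly 19 dB,
# so a flat or ultrasonic-aware weighting may suit bat or rodent recordings better.
```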
Finally, some comments delve into specific technical details, like the importance of impedance matching between the microphone and preamp, the role of phantom power, and the challenges of measuring extremely low noise levels accurately. One user mentions the possibility of thermal noise within the microphone itself being a limiting factor.
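The thermal-noise point has a simple physical form: the Johnson-Nyquist voltage noise of the microphone's source resistance, v_n = sqrt(4 k_B T R B), sets a floor no preamp can remove. The resistance and bandwidth below are illustrative assumptions, not figures from the article or the comments.

```python
# Johnson-Nyquist thermal noise of the microphone's source impedance sets a
# hard physical floor: v_n = sqrt(4 * k_B * T * R * B). Values are illustrative.
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 293.15           # room temperature, K
R = 200.0            # assumed source impedance, ohms
B = 20_000.0         # audio bandwidth, Hz

v_noise = math.sqrt(4 * k_B * T * R * B)
print(f"Thermal noise: {v_noise * 1e9:.0f} nV RMS "
      f"({20 * math.log10(v_noise):.0f} dBV)")   # ~254 nV, about -132 dBV
```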
Overall, the comments offer valuable perspectives on interpreting the microphone noise comparison, emphasizing the importance of considering the entire recording system, the intended application, and the nuances of the measurement techniques. They provide context beyond the raw data presented in the article and highlight the complexities of achieving truly low-noise recordings.