
The PS5 has an audio chip loosely based on the design of the PS3’s Cell CPU. It’s used to compute HRTF 3D audio. It’s really cool, but it’s basically the only modern example (not sure what Apple is doing for its 3D audio), and of course it’s a console, so it’s not a separate, user-replaceable sound card.

It’s a shame because I would love more audio sources to support HRTF (head-related transfer function) and “ray-traced” audio.



Games can have HRTF if the developers want to include it; it doesn't take a fancy sound card to make it happen. Counter-Strike: Global Offensive had an update a few years ago that implemented HRTF; it's now labeled as the "3D Audio" option in the game. It works on pretty much any modern sound card.


+1. It may not be HRTF, but the game with by far the best positional audio at the moment appears to be Crytek's Hunt: Showdown. Sounds can be pinpointed with amazing accuracy. Oftentimes, one can shoot blindly through a wall based solely on the noise an opponent makes, and score hits.

The game deliberately includes many sound sources to facilitate this, such as footsteps on various surfaces, glass shards on the ground, and wildlife that makes noise based on player proximity.

This works amazingly well on regular, on-board PC sound chips, though headphones are essentially mandatory.

(Disclaimer: not affiliated, just a fan).


Thank you. Are you able to adjust the positional audio to “better fit” your ears? The PS5 comes with a few presets, but unfortunately it feels like my ears are somewhere between two of them: with one preset, sound sources feel lower than they should, while with the next they feel higher than they should (compared to a reference sound at ear level).


That chip is responsible for much more than HRTFs, too. It can handle a huge amount of 3D-audio-related DSP effects and decoding, all of which are far more compute-intensive than the HRTF itself, which is applied once at the very end of the signal chain for the headphones.


How would HRTF be 'performed once at the very end of the signal chain'? Don't you have to transform every individual signal/position before mixing? On the other hand, I read somewhere that Atmos is encoded as an array of filters with positions, so decoding is merely a Fourier decomposition; I'd love to learn more about that.


There are a few different models at play: channel-based formats like 5.1, 7.1, ambisonics, and the 7.1.4 Atmos static bed; and object-based audio, where mono point-source sounds are attached to a location. The traditional channel-based models can be interpreted as individual objects positioned at the fixed speaker locations and folded down to stereo through the HRTF that way. It’s a mixed signal, so the HRTF really is at the end of the chain. Object-based sources are more precisely located but have other downsides (e.g. they break our mixing concepts for things like compression and reverb), and each object needs to be upmixed to binaural stereo through the HRTF individually.
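(Rough sketch of the channel-bed fold-down described above, not how any real renderer is implemented: each virtual speaker's signal is convolved with the HRIR pair for that speaker's fixed position and summed into two ears. The one-tap "HRIRs" here are placeholders standing in for measured ones.)

```python
import numpy as np

def binauralize_bed(channels, hrirs):
    """Fold a channel bed (e.g. 5.1/7.1) down to binaural stereo.

    channels: dict of speaker name -> mono signal (1-D array)
    hrirs:    dict of speaker name -> (left_ir, right_ir), the
              head-related impulse responses for that speaker's
              fixed position (placeholders here, not real data).
    """
    ir_len = max(len(hrirs[name][0]) for name in channels)
    n = max(len(sig) for sig in channels.values()) + ir_len - 1
    left, right = np.zeros(n), np.zeros(n)
    for name, sig in channels.items():
        l_ir, r_ir = hrirs[name]
        # Convolve this speaker's signal through each ear's HRIR
        # and accumulate into the stereo output.
        l = np.convolve(sig, l_ir)
        left[:len(l)] += l
        r = np.convolve(sig, r_ir)
        right[:len(r)] += r
    return left, right
```

Since the bed is already a mixed signal with a fixed number of speaker positions, the cost of this step is constant no matter how many sounds went into the mix.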

Higher-order ambisonics strikes a pretty good balance in terms of spatial resolution while still being a mixed signal. You can then pair it with objects for specific highlights. Atmos is a 7.1.4 static bed plus dynamic objects, so it's a similar idea. In either case, most of these 3D sound systems support very few dynamic objects; for example, Windows Sonic only supports 15 dynamic objects on Xbox: https://docs.microsoft.com/en-us/windows/win32/coreaudio/spa...
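(By contrast, a sketch of object-based rendering, again illustrative only: the three-entry HRIR table and nearest-neighbour lookup below are hypothetical, while real renderers interpolate dense measured datasets. The point is that every dynamic object needs its own HRTF convolution before the mix, which is why systems cap the object count.)

```python
import numpy as np

# Hypothetical per-azimuth HRIR table; real systems interpolate
# dense measured datasets rather than picking a nearest entry.
HRIR_TABLE = {
    0:   (np.array([1.0]), np.array([1.0])),  # front
    90:  (np.array([1.0]), np.array([0.3])),  # listener's left
    270: (np.array([0.3]), np.array([1.0])),  # listener's right
}

def nearest_hrir(azimuth, table=HRIR_TABLE):
    """Nearest-neighbour HRIR lookup on the circle (illustrative)."""
    def dist(a):
        d = abs(a - azimuth) % 360
        return min(d, 360 - d)
    return table[min(table, key=dist)]

def render_objects(objects, table=HRIR_TABLE):
    """objects: list of (mono_signal, azimuth_deg) point sources.

    Each object is convolved through its own HRIR pair *before*
    mixing, so cost grows with the number of objects.
    """
    ir_len = max(len(l_ir) for l_ir, _ in table.values())
    n = max(len(sig) for sig, _ in objects) + ir_len - 1
    left, right = np.zeros(n), np.zeros(n)
    for sig, az in objects:
        l_ir, r_ir = nearest_hrir(az, table)
        l = np.convolve(sig, l_ir)
        left[:len(l)] += l
        r = np.convolve(sig, r_ir)
        right[:len(r)] += r
    return left, right
```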


Thank you. Do you know if Sony has publicly released any more technical documentation about it? I know Sony put out that video with Cerny around the time of the PS5’s release, but I don’t know if there has been anything else.


Nothing public I’m aware of, unfortunately. I wish they talked about it publicly in more technical detail.



