Audio Brewers Beamforming Algorithm - it’s here!
I Spent 18 Months Building a Beamforming Algorithm. Here's What I Achieved.
After a year and a half of deep work, I’m proud to share a project that has pushed my limits technically and creatively: a custom beamforming algorithm for Ambisonics, designed from the ground up and finely tuned for second- to seventh-order audio.
At Audio Brewers, I’ve been obsessed with pushing immersive audio forward - not just following existing methods, but reimagining what’s possible. This algorithm reflects that mindset. It’s fast, clean, extremely directional, and capable of real-time performance with zero added latency.
Solving Beamforming’s Old Trade-Offs
If you’ve worked with Ambisonics, you know beamforming often comes with compromises. You usually have to pick two out of three: focus, cleanliness, and efficiency. Algorithms like MaxRe and Probe offer narrower directivity, but often introduce lobes - unwanted sound coming from the sides or rear.
In first-order Ambisonics, the math just doesn’t give you enough control to win this trade-off completely. That’s why I left first order alone and leaned on cardioid decoding there, choosing complete rear rejection over narrowness.
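For readers who want to see that first-order trade in concrete terms, here is a minimal sketch of the textbook cardioid virtual microphone that decision refers to - not the Audio Brewers algorithm - assuming ACN channel order and SN3D normalisation. The beam is half omni plus half figure-of-eight towards the look direction, which is exactly why it gives up narrowness in exchange for a perfect null directly behind the beam.

```python
import numpy as np

# Textbook first-order cardioid "virtual microphone" - a generic sketch,
# assuming ACN channel order (W, Y, Z, X) and SN3D normalisation.
# Not the Audio Brewers algorithm; shown only to illustrate the trade-off above.

def cardioid_beam(ambi_first_order, azimuth, elevation):
    """Steer a first-order cardioid at (azimuth, elevation), in radians.

    ambi_first_order: array of shape (4, num_samples) holding W, Y, Z, X.
    Returns a mono signal: gain 1.0 on-axis, 0.0 directly behind the beam.
    """
    w, y, z, x = ambi_first_order
    # Unit vector of the look direction
    ux = np.cos(elevation) * np.cos(azimuth)
    uy = np.cos(elevation) * np.sin(azimuth)
    uz = np.sin(elevation)
    # Cardioid = 0.5 * (omni + figure-of-eight towards the look direction)
    return 0.5 * (w + ux * x + uy * y + uz * z)

# Quick polar check: a horizontal plane wave from src_az, picked up by a beam at azimuth 0.
for src_az in np.radians([0, 90, 180]):
    encoded = np.array([[1.0], [np.sin(src_az)], [0.0], [np.cos(src_az)]])
    print(round(float(cardioid_beam(encoded, 0.0, 0.0)[0]), 3))  # 1.0, 0.5, 0.0
```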
But from second order onward, there’s enough spatial resolution to create something better—and that’s where I focused my effort.
A Unique Approach for Each Ambisonic Order
Rather than trying to stretch one solution across multiple orders, I fine-tuned a dedicated approach per order. This allowed each beam to be as narrow and clean as physically possible, while minimising lobes and maximising rear and side rejection (a generic sketch of how per-order weighting shapes a beam follows the list below).
Across all orders from second to seventh, the results consistently show:
Beams that are much tighter than traditional cardioids
Minimal bleed even when visualised at 90–100 dB ranges
Zero rear leakage, even at low Ambisonic orders
Consistent, frequency-independent performance
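To make the per-order idea a bit more concrete, the sketch below shows how order weights shape a generic axisymmetric Ambisonic beam. It uses only the textbook weight families - basic, max-rE, in-phase - and the standard pattern formula from the Ambisonics literature; the tuned per-order weights behind the results above are not reproduced here.

```python
import numpy as np
from numpy.polynomial import legendre
from math import factorial

# How per-order weights shape an axisymmetric Ambisonic beam - a generic sketch
# using the textbook weight families (basic, max-rE, in-phase) purely for
# illustration. The Audio Brewers per-order weights are not shown here.

def legendre_p(n, x):
    """P_n(x) via numpy's Legendre series (coefficient vector selecting order n)."""
    return legendre.legval(x, [0.0] * n + [1.0])

def beam_pattern(order_weights, gamma):
    """Normalised gain of an axisymmetric beam at angle `gamma` off-axis (radians).

    Pattern ~ sum_n w_n * (2n + 1) * P_n(cos gamma), normalised to 1 on-axis.
    """
    g = sum(w * (2 * n + 1) * legendre_p(n, np.cos(gamma))
            for n, w in enumerate(order_weights))
    on_axis = sum(w * (2 * n + 1) for n, w in enumerate(order_weights))
    return g / on_axis

N = 3  # Ambisonic order
basic = [1.0] * (N + 1)
max_re = [legendre_p(n, np.cos(np.radians(137.9 / (N + 1.51)))) for n in range(N + 1)]
in_phase = [factorial(N) * factorial(N + 1) / (factorial(N + n + 1) * factorial(N - n))
            for n in range(N + 1)]

for name, w in [("basic", basic), ("max-rE", max_re), ("in-phase", in_phase)]:
    gains = [beam_pattern(w, np.radians(a)) for a in (0, 60, 120, 180)]
    print(name, [round(float(g), 3) for g in gains])
```

At third order this already shows the trade-off the list refers to: basic weights give the narrowest main lobe but keep a negative rear lobe, while in-phase weights reject the rear completely at the cost of a much wider beam.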
Real-Time Beamforming, Zero Latency
One of the design pillars of this algorithm was speed. Many powerful beamformers rely on complex frequency-domain processing, which can introduce significant latency - sometimes more than 1000 samples. A great example is Zylia’s set of algorithms, which produce beautifully clean and narrow beams with excellent rear rejection - a solution I genuinely admire and respect.
However, in some workflows, especially live spatial tracking, real-time audio manipulation, or interactive environments, even a few milliseconds of delay can become limiting.
That’s why I built my algorithm to be zero-latency capable. When real-time performance is essential, users can explore or track beams instantly. No delay, no buffering - just clean, precise spatial audio right where you need it.
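The zero-latency property comes from staying in the time domain: when the beam is formed as a per-sample weighted sum of the Ambisonic channels, each output sample depends only on the input samples at the same instant, so there is nothing to buffer. Here is a minimal illustration of that structure with a hypothetical placeholder steering vector - not the Audio Brewers implementation.

```python
import numpy as np

# Why a time-domain beamformer can be zero-latency: if the beam is a per-sample
# weighted sum of the Ambisonic channels, output sample n depends only on input
# sample n - no FFT frames, no overlap-add, no lookahead.
# The weights below are a hypothetical placeholder, not the Audio Brewers ones.

def process_block(ambi_block, steering_weights):
    """ambi_block: (num_channels, block_size) Ambisonic samples.
    steering_weights: (num_channels,) gains for the current beam direction.
    Returns a mono block of the same length, with zero added latency.
    """
    return steering_weights @ ambi_block  # memoryless matrix-vector product

# Example: a third-order stream (16 channels) processed block by block.
rng = np.random.default_rng(0)
stream = rng.standard_normal((16, 4 * 512))
weights = rng.standard_normal(16)  # placeholder; a real beam derives these from
                                   # the look direction and per-order gains
out = np.concatenate([process_block(stream[:, i:i + 512], weights)
                      for i in range(0, stream.shape[1], 512)])
assert out.shape == (4 * 512,)  # same number of samples out as in - no delay
```

Frequency-domain designs, by contrast, have to collect a full frame of samples before each transform, which is where latencies of the size mentioned above tend to come from.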
Why This Matters
Creating this algorithm wasn’t just about technical curiosity. It was about unlocking new creative possibilities - letting sound designers, composers, and immersive audio engineers explore Ambisonics with tools that respond immediately and behave predictably, even at extreme resolutions.
Whether you're decoding to second order or pushing the limits of seventh, the goal was always the same: maximise focus, minimise bleed, and give users full control without sacrificing speed or clarity.
A Personal Achievement
This has been one of the most demanding and rewarding technical journeys I’ve taken. For 18 months, I explored, tested, scrapped, rebuilt, and refined. And in the end, I created something that I truly believe pushes beamforming to a new level: a set of Ambisonic beams that are fast, directional, and surgically clean, available from second to seventh order.
I'm excited about what this means for immersive audio, and I'm even more excited to see how others might use it.
If you're working in spatial sound and want to push beyond the usual limitations, this may be exactly what you’ve been looking for.
Cheers,
Alejandro
Audio Brewers