I own a Yamaha VL70-m, a synth from the 1990s that uses physical modeling and a breath controller.
I like this type of synth because it allows for more control over time than a simple MIDI note on and off; the modeling is also good enough to emulate wind instruments in a consistent way, yet simple enough to stay on the safe lowest bank of the audio uncanny valley.
It can be played directly using a dedicated controller such as the WX11, which is quite convenient if you know the fingerings.
It can also be played using a MIDI input (a keyboard or a sequencer), but that's where the "breath control" part becomes tricky...
I've tried several approaches to control it via MIDI, testing each of them on the third verse of Nick Perito's "C'mon Smile" as a benchmark.
The synth responds to the breath control CC; I can always set it to a fixed positive value and play, since the notes will still stop at each note off.
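For reference, breath control is MIDI CC#2, so a fixed breath value is just a single Control Change message. A minimal sketch of building one (the channel and value here are arbitrary choices for the example):

```python
# Sketch: a constant breath value as a raw 3-byte MIDI Control Change message.
# CC#2 is the standard MIDI Breath Controller number.

def breath_cc(channel: int, value: int) -> bytes:
    """Build a Control Change message for CC#2 (breath) on the given channel."""
    assert 0 <= channel < 16 and 0 <= value <= 127
    status = 0xB0 | channel           # 0xBn = Control Change on channel n
    return bytes([status, 2, value])  # controller number 2 = breath

msg = breath_cc(0, 100)
print(msg.hex())  # b00264
```

Sending that once at the start, and then plain note ons and offs, is all the "flat" version below amounts to.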
Here is how it sounds without any variation on the breath value:
Simulating the variations of the breath using automation or a knob controller is possible, but it can take a long time, and natural breath is difficult to imitate.
The WX11 is plugged into a dedicated port, so the synth can receive it and another MIDI signal at once; it is thus possible to play MIDI notes with anything while blowing into the WX11, and I've done that a lot.
There is one big problem though: if I stop blowing and then blow into the WX11 again, the synth considers that the start of a new note and defaults to the note the WX11 itself would play (usually a C# if no key is pressed on the body of the instrument), rather than the last note received over MIDI.
So to avoid stray C#s popping up now and then, I have to make sure I never blow under a certain threshold, or blow exactly at the moment a MIDI note starts.
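That "never go under the threshold" trick could in principle be automated in software by clamping the outgoing breath CC to a small floor, so the stream never fully stops. This is only a hypothetical sketch of that workaround, not a fix for the synth's behaviour, and the floor value is a guess that would need testing against the real threshold:

```python
# Hypothetical workaround sketch: keep breath CC#2 above a small floor so a
# new breath is never interpreted as a fresh (C#) note start.
FLOOR = 8  # assumed minimum; the real usable value would need experimentation

def clamp_breath(msg: bytes) -> bytes:
    """Raise any breath CC (CC#2) value to at least FLOOR; pass others through."""
    status, controller, value = msg
    if status & 0xF0 == 0xB0 and controller == 2:  # Control Change, CC#2
        value = max(value, FLOOR)
    return bytes([status, controller, value])

print(clamp_breath(bytes([0xB0, 2, 0])).hex())  # b00208
```

The obvious downside is a faint constant drone between phrases, which may or may not be audible depending on the patch.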
Maybe there is a configuration that solves this, but I've searched and never found one.
TEC USB controller
The TEC USB breath controller can send MIDI messages not only from breath pressure, but also from biting pressure, tilting and nodding.
Those parameters can be freely assigned to CCs, pitch bend or aftertouch, and it's also possible to design the response curve.
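As an illustration of what "designing the response curve" means, here is a simple power-curve shaping of a raw 0–127 sensor value; the exponent is made up for the example, not anything from the controller's editor:

```python
def shape(raw: int, exponent: float = 2.0) -> int:
    """Map a 0-127 sensor reading through a power curve, back to 0-127.
    exponent > 1 softens the low range; exponent < 1 makes it more sensitive."""
    x = raw / 127.0
    return round((x ** exponent) * 127)

print([shape(v) for v in (0, 32, 64, 127)])  # [0, 8, 32, 127]
```

A curve like this is what makes the difference between a controller that jumps to full blast and one that leaves room for quiet playing.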
The VL70-m has many fun parameters that can be controlled via CCs, so that's a real plus.
Some of these additional parameters are called "scream", "throat", "tonguing", etc. They sound like you think they do, but I had to do many factory resets after unhappy experimentations with them.
...but it's USB, so you need a computer, and that comes with its own load of latency and instability.
For some reason it worked quite well when I simply routed the signal to the MIDI out, but as soon as I try to record anything (even just the audio), the signal becomes unstable and filled with zero-value gaps. So I can't record with the same computer that is passing the signal through, which is very frustrating.
TEC analog controller
So far the TEC analog controller has been my best option.
The response curve is softer than with the WX11, and it doesn't have as many controls, nor bite or tilt sensors, but that's OK.
Before I bought the analog TEC controller, I tried to build a Max4Live patch that would play the breath part for me (and could also send messages to all the scream, pressure, noise, etc. parameters).
I tend to think it's impossible to really imitate the way a human blows, mainly because a human knows the shape of the song and the length of the notes in advance, while a patch reacting to live MIDI only discovers each note as it arrives.
For instance, the increasing volume during a sforzando is scaled to the length of the note, and that's one more reason why I wish MIDI also had a non-realtime approach where the note offs would be known in advance.
I also still wish the gain increase in the second part of a sforzando were more common in traditional envelopes, but that's another story.
So I made this little M4L patch. It's unfinished and doesn't have any parameters except for the fake ones I used for debugging, but it works as it is.
I'll try to describe the behaviour: it makes a short attack for each MIDI note on (the more silence there was before the note, the louder the attack), a slightly longer release at each note off, and a long sforzando for each "phrase" (in this context, notes close to each other tend to form a phrase). It then sums those, low-pass filters the signal, and sends it out as a breath CC value.
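For what it's worth, the idea can be sketched offline in ordinary code. This is only an illustration of the behaviour described above, not the actual M4L patch; the sample rate, phrase gap, attack scaling and filter coefficient are all made-up constants:

```python
# Offline sketch of the patch's idea: given note (start, end) times in
# seconds (sorted), build a breath-CC envelope. All constants are invented.
RATE = 100         # envelope samples per second
PHRASE_GAP = 0.5   # notes closer than this are grouped into one "phrase"

def envelope(notes):
    n = int((max(e for _, e in notes) + 0.2) * RATE)
    env = [0.0] * n
    # group consecutive close notes into phrases
    phrases, cur = [], [notes[0]]
    for note in notes[1:]:
        if note[0] - cur[-1][1] < PHRASE_GAP:
            cur.append(note)
        else:
            phrases.append(cur); cur = [note]
    phrases.append(cur)
    # per-note level: louder attack after a longer preceding silence
    prev_end = 0.0
    for s, e in notes:
        attack = min(1.0, 0.4 + (s - prev_end))
        for i in range(int(s * RATE), int(e * RATE)):
            env[i] = max(env[i], attack)
        prev_end = e
    # long sforzando: a linear swell over each whole phrase
    for ph in phrases:
        i0, i1 = int(ph[0][0] * RATE), int(ph[-1][1] * RATE)
        for i in range(i0, i1):
            env[i] += 0.5 * (i - i0) / max(1, i1 - i0)
    # one-pole low-pass smooths the steps (it also stands in for the short
    # attack/release ramps), then scale to the 0-127 CC range
    out, y = [], 0.0
    for v in env:
        y += 0.05 * (v - y)
        out.append(min(127, round(y * 127)))
    return out
```

Feeding it a list of (start, end) times in seconds yields CC values at 100 Hz, ready to be written out as breath automation.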
Here is how the same benchmark melody sounds (I consider it better than the flat CC version but worse than the analog BC one):
By the way, that Muzak voicing has become one of my easy arrangement patterns to follow, and here is how to do it:
Take a melody and make a close voicing of it, spanning less than an octave.
Take the second note from the top and drop it one octave down.
From that current state, the first and third notes (top to bottom) will be played by brass (say trumpet and trombone).
The second and fourth notes will be played by saxes (say alto and tenor).
Pan the brass hard right, the saxes hard left, and keep the bass and anything else centered.
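The steps above can be sketched as a small function over MIDI note numbers (the function name and instrument labels are mine, just for illustration):

```python
def muzak_voicing(close_voicing):
    """close_voicing: MIDI note numbers top to bottom, within an octave.
    Drop the second note from the top one octave, then assign instruments
    alternating brass/sax down the resulting stack."""
    notes = list(close_voicing)
    notes[1] -= 12               # "drop 2": second from top, down an octave
    notes.sort(reverse=True)     # re-read the new stack top to bottom
    parts = ["trumpet", "alto sax", "trombone", "tenor sax"]
    return list(zip(parts, notes))

# e.g. a C6 chord voicing: C5 A4 G4 E4 (top to bottom)
print(muzak_voicing([72, 69, 67, 64]))
# [('trumpet', 72), ('alto sax', 67), ('trombone', 64), ('tenor sax', 57)]
```

Note how the dropped note lands at the bottom of the stack and ends up on tenor sax, which is what gives the voicing its spread.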