Regarding the "Humanizer" function

How to do this, that and the other. Share, learn, teach. How did X do that? How can I sound like Y?

Post

Hi there! I am on a quest to make a song that sounds like it was played by actual musicians on physical instruments, using only MIDI. Until recently, it was very difficult to fool the general public with a project like that, let alone musicians and producers. With recent developments in technology and education, things are starting to look a little different. A few points on that trend are worth mentioning; since I'm here to ask for advice, feel free to skip them if you're short on time, as they're stated mostly for context.

- Heavy metal bands have relied on virtual drums for decades now, as these provide excellent recording quality, cut studio expenses and let drummers execute their intricate patterns and fills flawlessly. That metalheads are a special target of most virtual drum products on the shelf is obvious from the pack names and their sound, really. Needless to say, the genre's breakneck tempos help the MIDI-programming aspect go unnoticed.

- Addictive Drums, for instance, has a pack called Sloppy Grooves, which suggests some producers weren't feeling enough of the human element in the traditional MIDI packs, even though those grooves are played on physical drum pads by actual drummers and recorded straight to MIDI notes, with no quantizing.

- Most DAWs and even many VSTs have evolved from simple velocity-randomization functions to full-scale "humanizer" systems, with controls ranging from random seeds to dedicated groove-extraction menus, as well as instrument-specific, hair-splitting parameters whose math sometimes rivals Stochas's (https://stochas.org/).

- Physical modeling has broken ground after ground in recent years, to the extent that some have questioned whether it means the end of sampling (or at least of gigantic sample libraries). Years after the arrival of Pianoteq, Reason's Objekt has recently pushed the envelope for plucked and pitched "ethnic" percussion instruments: not only are the real instruments it simulates indistinguishable from reality, it also creates instruments that do not exist and yet could quite plausibly be crafted out of wood, metal and other physical materials, all following mathematical models of the physical world. Modeling electromechanical pianos has not only been accomplished, some models even fooling experienced producers in blind tests, but has become somewhat of a cookbook recipe for users of Vital. Audio Modeling's SWAM has just gained a new string section pack; while it might be far from fooling attentive ears in live performances, as it is still hard to play and there aren't that many SWAM players out there, with proper sequencing and for less demanding uses it seems even solo violin is no longer that big of a challenge for physical modeling. IK's MODO Drums still uses samples for crashes and cymbals, and it's clear that physical modeling has its limits: the models themselves, which are built on decisions taken by limited human developers with limiting human market perspectives. Yet, paired with MODO Bass, their product line pleases a lot of customers who are not aiming to produce hip-hop, UK garage, dubstep or vaporwave, but rather traditional, electroacoustic genres of pop music in all its ramifications. Folks at the University of Hong Kong have released a project that could very well reach for the holy grail of virtual instruments: the electric guitar.

- Even within the sample-based realm, gone are the days of stiff sound libraries. Also from China, Ample Sound has shown that an intelligent engine can make up for competitors' larger sound banks. MusicLab's RealGuitar takes that principle even further; it might sound less impressive to some, but their recent Top Guitarist Presets sound eerily similar to their human counterparts. Even developers who ruled the 2000s by sheer library size (and, it could be argued, by the lack of available CPU power) have drifted toward making their products more flexible, such as NI's Electric Mint.

- Artificial intelligence lurks with several implications for music and sound design; within the scope of this narrow topic, it has the impressive ability to generate round-robins that could make an 808 sound organic. Sonic Charge's recent Synplant 2 release has stirred heated arguments, but no one can deny its power to make machines sound more human. Even Analog Obsession, albeit fully focused on recreating warmer gear, has its weapon against the dreaded machine-gun effect (https://bedroomproducersblog.com/2023/01/13/analog-obsession-realizer/). This technology has become a tool of the trade in the physical modeling industry, to the point where Sampleson markets one of its products as being "alive" (https://sampleson.com/reed106.html).

- YouTube has perhaps become the biggest how-to repository in the history of mankind. Nowadays, household names from the golden eras, patch designers and sound engineers have started their own YouTube channels, where they humbly explain how they made "that sound". More to the point of this topic, musicians and producers are more than willing to share their secret sauces for making virtual instruments and bands sound more human.

Yet, many dismiss these efforts as robotic and argue that they could never come even close to the human feel. That might be true, but it's common knowledge that "disposable" background music has been produced this way extensively, as many small studios and soundtrack composers have cut costs, increased their profit margins and pushed down prices along the way. And while it could be argued that music made for entertainment is listened to with more attention, Charlie Puth's VST bass will hardly be rejected for being the perfect imitation that it is.

What's more, when one searches for how to make MIDI music sound real, the same answers come up: play it live, never quantize, randomize ticks by 10%, pitch-shift your hi-hats, etc. These techniques do help, but under careful scrutiny, few seem to really get it right. Truth be told, it's not hard to fool anyone's ears by playing Pianoteq live or programming easy pop-punk. But what about more complex genres, such as fusion jazz? How far are we, technology- and technique-wise, from the first MIDI-programmed fusion jazz album that could fool even lab-coat producers? Maybe it has already happened; I doubt anyone can say for certain what in Squarepusher's early discography is live and what isn't.
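For anyone curious what those standard tips look like in practice, here is a rough, self-contained Python sketch of the "randomize ticks by ~10%" and velocity-jitter advice. The function and parameter names are my own illustration, not any DAW's actual API:

```python
import random

def humanize(notes, ppq=480, timing_pct=0.10, vel_jitter=8, seed=42):
    """Randomize note start ticks by up to +/- timing_pct of a 16th note
    and jitter velocities slightly. notes: (start_tick, pitch, velocity)."""
    rng = random.Random(seed)        # seeded, so the resulting "feel" is reproducible
    sixteenth = ppq // 4             # ticks per 16th note (120 at 480 ppq)
    out = []
    for start, pitch, vel in notes:
        shift = int(rng.uniform(-timing_pct, timing_pct) * sixteenth)
        new_vel = max(1, min(127, vel + rng.randint(-vel_jitter, vel_jitter)))
        out.append((max(0, start + shift), pitch, new_vel))
    return out

# A rigidly quantized closed hi-hat line (GM note 42), all at velocity 100:
grid = [(i * 120, 42, 100) for i in range(8)]
print(humanize(grid))
```

The seed is the interesting part: many DAW humanizers expose it so a "random" feel can be recalled exactly, which is why re-rendering a project doesn't change the groove.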

In any case, I would like to start this discussion to try and find like-minded folks who share the same goal, or who are simply willing to share their knowledge with someone who has just started this journey. Thank you for your time!


Post

So - I've now read a bit. You put the emphasis on "fooling the listener" through MIDI constructions. :roll:

I don't think this topic is that important, really - it's actually not that significant. Ultimately, what matters is that the song itself is really good in terms of arrangement and performance and that it captivates the listener. And that's exactly why it doesn't matter whether the song comes about through live playing or MIDI construction - or a combination of both.

We (see "enroe" in the footer) mostly play live. But we are increasingly considering delegating individual parts to MIDI lines - but only if they sound just as good and accurate, and still wild and appropriate. :wink:
free mp3s + info: andy-enroe.de songs + weird stuff: enroe.de

Post

I really don't get the fascination with this topic at all. Adequate humanization has existed in sequencers since the 1980s; the fact that many producers don't use it is on them, not the technology. Similarly, it doesn't take AI to have varied sample packs for (say) drums that simulate expressiveness and variance via MIDI velocity. Even the less expensive ones, like Steven Slate Drums, include lots of different mikings and plenty of sample variations.
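To make the "expressiveness via MIDI velocity" point concrete, here is a toy Python sketch of how a sampler typically combines velocity layers with round-robin variations to avoid the machine-gun effect. The layer boundaries and sample names are made up for illustration, not taken from any real product:

```python
# Hypothetical layers: velocity range -> recorded variations of one snare hit.
LAYERS = {
    (1, 63): ["snare_soft_rr1", "snare_soft_rr2"],
    (64, 101): ["snare_med_rr1", "snare_med_rr2", "snare_med_rr3"],
    (102, 127): ["snare_hard_rr1", "snare_hard_rr2"],
}

class RoundRobinPicker:
    """Pick a velocity layer, then cycle its round-robin variations so
    repeated identical hits never trigger the same recording twice in a row."""

    def __init__(self, layers):
        self.layers = layers
        self._counters = {vel_range: 0 for vel_range in layers}

    def pick(self, velocity):
        for (lo, hi), samples in self.layers.items():
            if lo <= velocity <= hi:
                i = self._counters[(lo, hi)] % len(samples)
                self._counters[(lo, hi)] += 1
                return samples[i]
        raise ValueError("velocity outside MIDI range 1-127")

picker = RoundRobinPicker(LAYERS)
print([picker.pick(100) for _ in range(4)])
# -> ['snare_med_rr1', 'snare_med_rr2', 'snare_med_rr3', 'snare_med_rr1']
```

No AI involved - just bookkeeping, which is the poster's point.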

The areas where there *is* significant recent improvement due to AI (for example, vocal synths) are completely different from humanization meant to "fool the listener".

Post

enroe wrote: Fri Oct 20, 2023 11:05 am We (see "enroe" in the footer) mostly play live. But we are increasingly considering delegating individual parts to MIDI lines - but only if they sound just as good and accurate, and still wild and appropriate. :wink:
I think most bands do this. If not most, then many. We certainly did: I would sequence most of the song and leave out the synth parts I wanted to play live on stage :)

Post

That's a detailed post. I suppose the angle here is that making something sound more 'human' involves two things: human error and human expression. The error aspect is easy to simulate in certain ways, but the expression aspect is more subtle and perhaps historically inscrutable... yet AI has tools to systematise it in new ways, at previously unavailable depths. And that suggests there will be a bridge from simple extensions of humanisation in the direction of expression towards a systematisation of expression at larger scales, i.e. full compositions.

In terms of the virtual instrument market at this moment, it seems odd to me that no one has yet created an AI-based drum machine which takes the expressive characteristics of traditional percussion and applies them to synthetic sounds. So far the techniques for simulating this have been quite basic, i.e. filters to vary presence/high end, degree of distortion, prominence of the transient. And setting those parameters up to respond well is a PITA, whereas I would imagine AI could do a better job.
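The hand-tuned version of what's described above - velocity driving brightness, dirt and transient snap - can be sketched in a few lines of Python. Every name and range here is an invented illustration, not any plugin's parameters:

```python
def expressive_params(velocity):
    """Map MIDI velocity (1-127) to synthetic-drum parameters, mimicking how
    harder hits on acoustic percussion get brighter, dirtier and snappier.
    All names and curves are made-up illustrations, not a real plugin's API."""
    v = max(1, min(127, velocity)) / 127.0
    return {
        "cutoff_hz": 2000 + 10000 * v ** 2,    # more high end on hard hits
        "drive": 0.1 + 0.6 * v,                # distortion grows with velocity
        "transient_gain_db": -6.0 + 12.0 * v,  # louder click on accents
    }

for vel in (30, 80, 120):
    print(vel, expressive_params(vel))
```

The pain the poster describes is exactly this: the curves (`v ** 2` vs linear, the ranges) all have to be dialed in by hand per sound, which is the part a learned model could plausibly automate.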
Every day takes figuring out all over again how to f#ckin’ live.

Post

The thing is, I am not sure the existing humanizers even need improving on at all. They already do an excellent job IMO.

Post

Humanizers come in two flavors: Randomizers and Grooves.

Randomizing can help a bit, but it can't make a real groove - unless sloppy-drunk amateur-hour lazy is the groove :roll:

Grooves better represent the feel that quality players deliver. Think of how performers like Sinatra, Bill Bruford and Pino Palladino control time to make us feel a certain way. Reason, like many DAWs, lets us overlay not just the rigid beat-division quantize but a MIDI pattern that creates a shaped time.

I use this a bit by playing something very rough into the sequencer with any old sound until I feel I have a flow. It will be very wild, but I don't care. I assign that MIDI clip to the groove function with a small amount (setting a large amount would be super broken :clown: ) and often a bit of slop. Now the whole piece has a unified time flow.
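That extract-then-apply-with-an-amount workflow can be sketched in plain Python. This is my own toy version of the idea, not Reason's actual groove engine; it averages the timing offset per 16th-note slot from a roughly played clip, then pulls a quantized part partway toward it:

```python
import random

def extract_groove(played_ticks, grid=120):
    """Average timing offset per 16th-note slot in a loosely played clip."""
    offsets = {}
    for t in played_ticks:
        slot = round(t / grid) % 16              # position within a 16-step bar
        offsets.setdefault(slot, []).append(t - round(t / grid) * grid)
    return {s: sum(v) / len(v) for s, v in offsets.items()}

def apply_groove(quantized_ticks, groove, amount=0.3, slop=0.0, grid=120, seed=1):
    """Pull rigidly quantized ticks toward the groove by `amount` (0-1),
    optionally adding a touch of random slop."""
    rng = random.Random(seed)
    out = []
    for t in quantized_ticks:
        slot = round(t / grid) % 16
        shift = groove.get(slot, 0.0) * amount + rng.uniform(-slop, slop)
        out.append(int(round(t + shift)))
    return out

# Roughly played clip: slot 1 lands 10 ticks late, slot 2 lands 5 ticks early.
groove = extract_groove([0, 130, 235, 370])
print(apply_groove([0, 120, 240, 360], groove, amount=0.5))
# -> [0, 125, 238, 365]
```

A small `amount` nudges everything toward one shared feel without copying the wildness of the source take, which matches the "unified time flow" result described above.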

This is of course nowhere near as good as simply phoning the names above to come play things at a level I will never reach.
:-)

