Any news on Adobe Audition 4?
-
- KVRAF
- 7489 posts since 6 Jul, 2004
I think the confusion (for BONES) is that the spectral view he knew in CE2k was extremely primitive compared to that which Adobe have implemented over recent updates.
As for using filtering/EQ, koolkeys sums it up with the dog bark illustration, and of course the same applies if somebody in the audience coughs during a piano recital, etc. It would be a right pain trying to notch that out with EQ/filtering, but in Audition it takes only a few seconds to isolate it, draw around it and hit the delete button.
"Easier" does not always equal "Better", but in this case I think it certainly does. It's much the same thing that iZotope are of course shooting towards with their new RX plugin, which costs almost as much as Audition!
-
- KVRist
- 246 posts since 7 Apr, 2006 from Adelaide, Australia
headquest wrote: I think the confusion (for BONES) is that the spectral view he knew in CE2k was extremely primitive compared to that which Adobe have implemented over recent updates. As for using filtering/EQ, koolkeys sums it up with the dog bark illustration, and of course the same applies if somebody in the audience coughs during a piano recital, etc. It would be a right pain trying to notch that out with EQ/filtering, but in Audition it takes only a few seconds to isolate it, draw around it and hit the delete button. "Easier" does not always equal "Better", but in this case I think it certainly does.
Exactly. Now that we're on the same software page these days.
EQing a dog bark in a piece of audio the conventional way would simply filter out everything else around the dog bark arbitrarily. Audition provides specific frequency filtering that just targets the bark and nothing else. The visual aspect just makes it a whole lot easier to isolate.
It's not that you cannot do normal EQing with it, but that you CAN do things in it that you simply cannot do with regular plugs in a non-spectral way.
-
- KVRAF
- 6111 posts since 18 Oct, 2007
Ozpeter wrote: The Cool Edit 2000 spectral display allowed you to select vertical slices only, not horizontal.
This is exactly what I wanted to say. BONES, why don't you just try it and then you'll see what we mean!
headquest wrote: It's much the same thing that of course iZotope are shooting towards with their new RX plugin, which costs almost as much as Audition!
... and has way fewer features but is still the same technology in a nutshell (I've tried it on my boss's system).
-
- KVRist
- 75 posts since 31 Dec, 2004
pricer wrote: You may want to steer more towards Sound Forge 9 - cheaper (I believe) and essentially the same feature-set, now that they have added multi-tracking (I myself am still on v8, so can't tell you how good it is).
Sound Forge does not present a multi-track interface. It has a multi-channel interface.
You cannot record "tracks" in Sound Forge like you would in a DAW. What you can do is create multi channel audio files.
The purpose of our multi-channel editing is to create content that has more than two channels (stereo). You can edit for surround formats and other multichannel formats.
Peter
-
- KVRAF
- 6111 posts since 18 Oct, 2007
Hi Peter,
SonyPCH wrote: The purpose of our multi-channel editing is to create content that has more than two channels (stereo). You can edit for surround formats and other multichannel formats.
This reminds me of an app I used on a Mac years ago called SoundScuplter. It was simply a multi-channel audio tracker (no MTC or MIDI). It was one of my favorite apps for creating music at the time! Is this somewhat the same thing?
-
- KVRist
- Topic Starter
- 69 posts since 13 Sep, 2005
Thanks for all the replies, and I'm glad to see Audition has so many fans. It will sit very well alongside Cubase for me (so the VST implementation isn't the no. 1 concern), and I expect I will try to pick it up soon. I'm sure it will make sense to buy it now and then upgrade to the next version, since several people mentioned special upgrade pricing.
Cheers,
Zam
-
- KVRAF
- 16153 posts since 2 Dec, 2003 from Nashville, TN
Just curious, but does anyone know how to (or if it's even possible) create loop points for use in a sampler inside of Audition? So I can set the loop points in Audition, then import the sample into Kontakt and have it use those loop points?
Brent
My host is better than your host
-
- KVRist
- 254 posts since 23 Apr, 2004
If you create a time selection in Audition's edit view and press the F8 key you create a cue with start/end points, which may - can't promise! - be visible in other apps as a loop (once you save the file in wave format in Audition).
-
- KVRAF
- 6111 posts since 18 Oct, 2007
koolkeys, yes this should work. Only create one cue region and turn on 'Loop Mode' for the particular audio file. You do this (in AA2) by going to View > Wave Properties > Loop Info and then clicking Loop. Also, when you save, make sure 'Save extra non-audio information' is checked!
The only other question would be whether or not K2 or 3 will read the loop info.
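For anyone curious what 'Save extra non-audio information' actually stores: sampler loop points live in a standard RIFF 'smpl' chunk inside the WAV file. Here's a rough Python sketch of writing one by hand - the helper name and demo file are made up for illustration, not an Audition or Kontakt API:

```python
import struct
import wave

def append_loop_chunk(path, loop_start, loop_end, sample_rate=44100):
    """Append a 'smpl' chunk holding one forward loop to an existing PCM WAV."""
    smpl = struct.pack(
        "<4sI9I6I",
        b"smpl", 60,                   # chunk id, body size: 36 + one 24-byte loop
        0, 0,                          # manufacturer, product (unused)
        1_000_000_000 // sample_rate,  # sample period in nanoseconds
        60, 0,                         # MIDI unity note (middle C), pitch fraction
        0, 0,                          # SMPTE format and offset (unused)
        1, 0,                          # one loop, no trailing sampler data
        0, 0,                          # loop record: cue id, type 0 = forward
        loop_start, loop_end,          # loop points in sample frames
        0, 0,                          # fraction, play count (0 = loop forever)
    )
    with open(path, "r+b") as f:
        f.seek(0, 2)                   # append at the end of the file
        f.write(smpl)
        riff_size = f.tell() - 8       # RIFF size excludes the 8-byte header
        f.seek(4)
        f.write(struct.pack("<I", riff_size))

# Demo: one second of silence, looped over its middle
with wave.open("loop_demo.wav", "wb") as wf:
    wf.setnchannels(1)
    wf.setsampwidth(2)
    wf.setframerate(44100)
    wf.writeframes(b"\x00\x00" * 44100)
append_loop_chunk("loop_demo.wav", 11025, 33075)
```

Samplers that honour WAV loop metadata look for exactly this chunk, so whether Kontakt picks it up comes down to whether it reads 'smpl'.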
- GRRRRRRR!
- 15971 posts since 14 Jun, 2001 from Somewhere else, on principle
SonyPCH wrote: Sound Forge does not present a multi-track interface. It has a multi-channel interface.
ooh, someone from Sony!
Hey Peter, how can I create a multi-channel file from scratch? i.e. if I want to use a multichannel file to simulate multi-channel recording. I can't see any way with the "New" dialog to create more than a mono or stereo file to record into.
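Not a Sound Forge answer, but as an illustration of what a multi-channel file is under the hood: a multi-channel PCM WAV is just frames of interleaved samples, one per channel. Python's stdlib `wave` module will happily write more than two channels; a minimal sketch (file name and channel contents are invented for the demo):

```python
import struct
import wave

def write_multichannel_wav(path, channels, sample_rate=44100):
    """Write equal-length lists of 16-bit samples as one interleaved
    multi-channel PCM WAV file."""
    n_frames = len(channels[0])
    with wave.open(path, "wb") as wf:
        wf.setnchannels(len(channels))
        wf.setsampwidth(2)               # 16-bit samples
        wf.setframerate(sample_rate)
        frames = bytearray()
        for i in range(n_frames):        # interleave frame by frame
            for ch in channels:
                frames += struct.pack("<h", ch[i])
        wf.writeframes(bytes(frames))

# Four channels of silence, half a second each
write_multichannel_wav("quad.wav", [[0] * 22050 for _ in range(4)])
```

Whether Sound Forge's "New" dialog exposes this is a separate question, but the resulting file is exactly the kind of multi-channel content Peter describes.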
NOVAkILL : Asus RoG Flow Z13, Core i9, 16GB RAM, Win11 | EVO 16 | Studio One | bx_oberhausen, GR-8, JP6K, Union, Hexeract, Olga, TRK-01, SEM, BA-1, Thorn, Prestige, Spire, Legend-HZ, ANA-2, VG Iron 2 | Uno Pro, Rocket.
- GRRRRRRR!
- 15971 posts since 14 Jun, 2001 from Somewhere else, on principle
koolkeys wrote: I don't think it has anything to do with understanding something technical or taking the easy way out instead. It's just another way to do something, only in an easier way.
I really don't see how it's easier, and as I said, I don't get what the Spectral view is telling me. I used the example of removing a breath - I can look at a waveform and see the breaths, which makes them easy to remove. OTOH, in Spectral view everything looks the same to me, so isolating something like that would require me to learn how to read the spectral information and get comfortable with it. Maybe after several years of toiling away, things would become easy, but right now it seems like most things would be a hell of a lot harder for me.
Although I do hold to the belief that you can do things in this view that are near impossible to do without it. But that's just an opinion not based on knowing what everyone uses it for.
It would actually be really helpful if you could name something. At the moment I see it as something that is mostly opaque.
Ozpeter wrote: The Cool Edit 2000 spectral display allowed you to select vertical slices only, not horizontal. It has virtually nothing to do with the spectral editing in Audition, which allows you to make a number of irregular selections at once and apply whatever effect you want to them in one pass. Whether it would even be possible another way, it would be a whole lot harder.
See, this just tells me you don't understand the process. The vertical selection is a frequency range, which you can easily isolate using a filter. As I said, depending on what you want to do, you may need to work with a second layer, but it is all still possible using more conventional methods, as far as I can see.
You cannot isolate a vocal in the centre of a mix with EQ.
Not as such, but EQ is a tool you can apply successfully to raise or lower that vocal in a mix.
And what if you want to isolate the vocal at the centre from another vocal duetting half-left? Some kind of filter? I don't think so.
And exactly how would you achieve that using the Spectral editing tools in Audition? It's not magic, it's just a graphical user interface that will suffer from all the same shortcomings as normal filtering. If the frequencies of the two vocals overlap, nothing will help you isolate one without getting a chunk of the other, too. That kind of clean excision is only possible in very specific circumstances, and if it works one way it will work in others.
headquest wrote: As for using filtering/EQ, koolkeys sums it up with the dog bark illustration
Which is a trivial thing to fix up using filters, too. I do it all the time when a car goes by or birds start twittering in the middle of a vocal take, and you don't hear any background cars or stupid birds in my finished songs.
and of course the same applies if somebody in the audience coughs during a piano recital, etc. It would be a right pain trying to notch that out with EQ/filtering
Actually, that sort of thing is really easy because it is something people have had to deal with for a long time. The process is well understood and any experienced engineer will be able to do it in his/her sleep. I can see where it might be easier than learning to do it by more traditional means, but I don't see that it is going to allow you to do anything that cannot already be done.
HERCsMusic wrote: EQing a dog bark in a piece of audio the conventional way would simply filter out everything else around the dog bark arbitrarily. Audition provides specific frequency filtering that just targets the bark and nothing else.
There you go, you use the correct terminology but you clearly don't understand it. "Filtering" is exactly the same process in both cases. If you are removing specific frequencies over a specific span of time, you will affect the result in exactly the same way, whether you make a lasso selection on a spectral graphic or choose frequency, bandwidth and slope settings in a filter. If Audition is doing it better than filtering, it is probably because it is using better filters under the hood.
It's not that you cannot do normal EQing with it, but that you CAN do things in it that you simply cannot with regular plugs in a non-spectral way.
Like what?
-
- KVRist
- 246 posts since 7 Apr, 2006 from Adelaide, Australia
Bones, either we are not explaining the process to you correctly, or you are not understanding it.
If you have not tried the demo, then all I can say is that you probably should so that you are speaking from actual experience. And if you have tried it but simply don't see the benefits or have not understood how the spectral information works, then there really is nothing to say.
I'm personally not trying to convince you to use it. You can use a wet stick and a bongo for all it matters at the end of the day.
I've used SF and Wavelab as well and all of them have benefits and drawbacks.
How is it different you ask?
Well, fundamentally it's still filtering out specific frequencies. I think the fundamental difference here, that people are trying to articulate, is that the way the spectral information is presented, with a temporal and a frequency axis, allows you to select a specific frequency horizontally, giving you full control over the temporal axis.
That's really something you can't do with conventional filtering. Normally you'd just stack a few filters across the waveform and it would affect the entire spectrum of the waveform, giving you no horizontal or temporal control.
If you don't think that's fundamentally different, then the discussion ends here, because no one is going to be 'right' about it.
In my experience, the spectral information, as it is presented in apps like Audition, iZotope Spectron and others, is really not difficult to decipher. You see frequency and you see time. You still play back the waveform as you normally would; it's just that you are seeing a spectral representation of it. It gives you a few different options than you'd normally get if you were just working with the whole waveform.
Yes, you can look at the peaks in the waveform in detail and try to isolate the cough, and if the cough is different enough in the amplitude and frequency domains to be visible to the naked eye, you could filter it by hand or use an EQ. No one is arguing otherwise. Engineers have been doing it for many years. All the tool in AA does is give you an easier way, in certain scenarios, of doing the same thing.
Heal the cough. Apply a reverb to only certain parts of the spectrum and leave others alone; get rid of one hi-hat in a drum loop and leave the rest alone. This is all possible. But a man convinced against his will is of the same opinion still.
So, hey, no sweat.
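To make the "temporal control" point above concrete: a conventional EQ runs over the entire file, while a spectral selection touches only a chosen time span. A crude pure-Python stand-in is a standard biquad notch (RBJ audio-EQ-cookbook coefficients) applied only between two sample indices - all names and numbers here are illustrative, not anything Audition actually exposes:

```python
import math

def notch_over_span(samples, sample_rate, freq, q, start, end):
    """Apply a biquad notch at `freq` Hz, but only to samples[start:end],
    leaving the rest of the file untouched -- a rough analogue of lassoing
    one time/frequency region in a spectral editor."""
    w0 = 2 * math.pi * freq / sample_rate
    alpha = math.sin(w0) / (2 * q)
    # RBJ cookbook notch coefficients (unity gain away from the notch)
    b0, b1, b2 = 1.0, -2 * math.cos(w0), 1.0
    a0, a1, a2 = 1 + alpha, -2 * math.cos(w0), 1 - alpha
    out = list(samples)
    x1 = x2 = y1 = y2 = 0.0
    for n in range(start, end):
        x0 = samples[n]
        y0 = (b0 * x0 + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2) / a0
        x1, x2 = x0, x1
        y1, y2 = y0, y1
        out[n] = y0
    return out

# Kill a 1 kHz 'whistle' only during the middle of the take, not throughout
sr = 44100
sig = [math.sin(2 * math.pi * 1000 * n / sr) for n in range(sr)]
cleaned = notch_over_span(sig, sr, 1000.0, 1.0, 20000, 40000)
```

The untouched spans come back bit-identical, which is the part a whole-file EQ cannot give you; a real spectral editor extends the same idea from one frequency band to an arbitrary lassoed region.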
-
- KVRian
- 593 posts since 14 Apr, 2005
Ah, the endless debate: using your ears plus eyes with spectral/visual tools vs using your ears only with traditional tools.
Does anyone know what the differences are between the spectral editor in AA3 and Wavelab6? Pros and cons?
/Johnny
-
- KVRAF
- 16153 posts since 2 Dec, 2003 from Nashville, TN
Optomadic wrote: koolkeys, yes this should work. Only create one cue region and turn on 'Loop Mode' for the particular audio file. You do this (in AA2) by going to View > Wave Properties > Loop Info and then clicking Loop. Also, when you save, make sure 'Save extra non-audio information' is checked! The only other question would be whether or not K2 or 3 will read the loop info.
Thanks, and thanks to Ozpeter as well. I tried that method and it didn't seem to work. Maybe I'm doing something wrong though?
I did read in the Kontakt manual that it's supposed to recognize the metadata loop points that wave files can carry. So I wonder if I either did something wrong, or if Audition doesn't do this?
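One way to work out which side is at fault is to inspect the saved file directly: if a loop was saved, it sits in a 'smpl' chunk. This hedged sketch (the helper name is mine) scans a WAV's RIFF chunks and reports the first loop, so you can tell whether Audition wrote the metadata before blaming Kontakt:

```python
import struct

def read_loop_points(path):
    """Return (start, end) of the first sampler loop found in a WAV's
    'smpl' chunk, or None if no loop metadata was saved."""
    with open(path, "rb") as f:
        data = f.read()
    pos = 12                                    # skip 'RIFF', size, 'WAVE'
    while pos + 8 <= len(data):
        cid, size = struct.unpack_from("<4sI", data, pos)
        if cid == b"smpl" and size >= 60:
            n_loops = struct.unpack_from("<I", data, pos + 8 + 28)[0]
            if n_loops:
                # first loop record starts 36 bytes into the chunk body;
                # start/end frames sit at offsets +8 and +12 within it
                return struct.unpack_from("<II", data, pos + 8 + 44)
            return None
        pos += 8 + size + (size & 1)            # chunks are word-aligned
    return None
```

If this returns None on a file you saved with 'Save extra non-audio information' checked, the loop never made it into the file; if it returns your loop points and Kontakt still ignores them, the problem is on the sampler side.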
Brent
-
- KVRAF
- 7489 posts since 6 Jul, 2004
And what if you want to isolate the vocal at the centre from another vocal duetting half-left? Some kind of filter? I don't think so.
BONES wrote: And exactly how would you achieve that using the Spectral editing tools in Audition? It's not magic, it's just a graphical user interface that will suffer from all the same shortcomings as normal filtering. If the frequencies of the two vocals overlap, nothing will help you isolate one without getting a chunk of the other.
Audition 3 has a spectral Pan display, in which you can also make accurate selections for editing. I think in that view you can in fact do this (but it was ozpeter's illustration, so I'll let him answer more fully).
To answer drjohnny79, this is an example of how far ahead Audition is compared to Wavelab 6 in this respect. Wavelab 6, I believe, has frequency spectrum editing about like Audition 1 had, or the CE2K editing that BONES is familiar with. In Audition 3 there are many more tools, and you can pretty much treat an audio file like a Photoshop picture...
As HERCsMusic says, try this out and you will understand better.