One Synth Challenge #108: Diodow by HrastProgrammer (Schiing Wins!)


Post

TheNeverScene wrote: So far, my only 1's are me and Taron.....
Haha.... just got that one :hihi: :clap:

Post

Frostline wrote: And while you might not intend a score of 2 to equate to X such as my personal scale does, without implicit knowledge of what your 'relative' scale is based on how is one to know?
Agreed. No way to know. All I ask is please do not assume or even insist that it means anything - unless the voter has explained their system.
Frostline wrote: "Loads of good tracks", does not equate to 'all are good tracks' I assume? Or does a track merely existing qualify as 'good'? Or if a track is not 'good' it must be terrible?
Well, for the way I sometimes do it, and would like to do it more often, the annoying answer is 'it depends' :)
Some months - imo - there are no terrible tracks. Some months there are.
If a track is not 'good' it must be terrible? Not at all for me - I am saying the opposite of that. That is my point.
For me 5 is better than 4 is better than 3 is better than 2 is better than 1. That is the only consistent fact.
1 might be terrible. 2 might be terrible (imho of course).
But 1 might be OK. Or even quite good. It all depends on the entries that month. The scoring does not mean anything in absolute terms; it is just a ranking relative to the other tracks in my evaluation.
But if someone insists that if I vote 1 or 2 it means I am saying it is terrible - I feel like I'd better not use those scores if I don't think it is terrible - even though I would like to use those scores to have a greater relative range.
Frostline wrote: So while I may now know that a score of 2 from Zarf may mean my track was still 'good' to their standard, what about the other 19+ 'relative' standards?
Right. You don't know. Unless we tell each other how we are scoring, or have an agreed standard, then we don't know. That's my point. Please don't anyone assume that you do know :)
Frostline wrote: I completely empathize with the desire to better distinguish degrees of 'good' in the voting. A scale I would like would be something like 1,2,3-,3,3+,4-,4,4+,5-,5,5+. And I actually structure my votes like that on the voting page.
Or 1-10, more or less :D (or 1-11 in your example. Maybe TNS is right!! Everything to 11)
Frostline wrote: I sincerely hope you do not view this as an attack on your position or on you personally. I too find this interesting to discuss and am in no way trying to imply your 'relative' scale is 'wrong' or improper or anything of the sort. :hug:
Not at all my friend, happy to talk about it, this is all good :hug:

Post

Thanks for everyone's thoughtful answers to my question about voting (how much I like the music vs. appreciation of straight skill).

@zarf - I like your balanced approach... I think I'm leaning toward going with that and weighing "how much I like it" a little less heavily.

@Taron/dB - Yes, I agree it's kind of personal, and I appreciate your encouragement to balance what I "think" is good while also going with my gut a little bit. Also dB, what you say about knowing so little about the individual trials and tribulations that went into each entry from only hearing their results: THIS IS SO TRUE, and why I take voting seriously enough to ask about it!

@Peter:H Thank you for the wisdom of Bob Ross! :D

@schiing - thanks for your straightforward answer... that's why I asked, and I definitely don't take your useful reply as a secret plea for a 4 or a 5, haha! (ps. I would never vote that amazing vape track of yours lower than a 4! ...but yes your track was one of SEVERAL this month that really got me thinking harder about how to divvy up my 4's and 5's)

@TheNeverScene - Yes, I also feel like I'm halfway between trying to create a useful spread vs. just calling it with a simple merit system and letting all those 4's and 5's just be 4's and 5's.


All in all this chat has me reconsidering a bit, and I'm gonna end up shuffling my votes around again before the due date.

And upon further reflection I have no qualms with anybody's system as long as it doesn't involve giving out weird sabotagey 1's to people :D

Post

Interesting Article!

According to these researchers, more available ratings are better up to a *maximum of 11* before the law of diminishing returns starts making this worse: http://waypointgroup.org/why-a-0-10-sca ... st-option/

Edit: oops! wrong link. Fixed.

Post

mostlyjustj wrote:
TheNeverScene wrote: So far, my only 1's are me and Taron.....
Haha.... just got that one :hihi: :clap:
Haha, for a second I thought I was the only "one"! And no, I did not do that for anyone who may be scratching their heads.......not even close.

Post

All of my amps go to 11.....my wife hates it!

Post

mostlyjustj wrote:And upon further reflection I have no qualms with anybody's system as long as it doesn't involve giving out weird sabotagey 1's to people :D
That's a relief, as I could then test my system derived from the Scoville organoleptic dilution method. As they have traditionally rated the heat level of different peppers... or dang, maybe a direct Wikipedia quote: in order to arrive at a rating, "an exact weight of dried pepper is dissolved in alcohol to extract the heat components (capsaicinoids), then diluted in a solution of sugar water. Decreasing concentrations of the extracted capsaicinoids are given to a panel of five trained tasters, until a majority (at least three) can no longer detect the heat in a dilution. The heat level is based on this dilution, rated in multiples of 100 SHU. [Scoville Heat Units]"

So, if you have a track that is so full of heat you absolutely have to dilute at least five aspects of it until the heat is gone, it's five OHU (OSC Heat Units, better skip the multiplication by a hundred in this scale though, hmmh).

And uh... It's hard, I might be overthinking this.

Post

Playing devil's advocate here for those who favor relative voting.

One month, we do a "new to the OSC" contest where you have to be new to the OSC and can't have been recording synths for more than 30 days. I mean you're BRAND new and most likely pretty bad at this stuff. And let's say we actually get 30 honest people who truly have never done this before and are green as can be.

The tracks are submitted and they are ALL God-awful. I mean REALLY bad. No limiting, so every track is clipping and distorted (and not the good kind of distortion either); there is no melody or harmony to any of the songs, nothing's on beat, the mixes are horrendous. Drums so loud that you can hardly hear anything else. Everything panned hard left or right. I mean, think of everything you could possibly do to totally ruin a piece of music.

Now you're confronted with having to LISTEN to these 30 tracks, if you can even manage to get through all that noise. There is no doubt that EVERY track is a 1, and not even that. I mean we're talking ZERO.

You're actually going to wade through that mess and try to pick out the best of the worst? What are your criteria? Who clipped the least? Who had the least amount of distortion?

Granted, this is an extreme case. But the point is, every track is as bad as it can possibly be. Why drive yourself crazy trying to find 2s, 3s, 4s and 5s in a garbage dump? Is it even possible when things are THAT bad?

I wouldn't even waste my time trying. In fact, after 30 seconds of listening to something obviously that bad, I wouldn't even finish listening. I'd turn it off and go on to the next one. So after 900 seconds (30 tracks at 30 seconds each), or 15 minutes, I would be done listening to this train wreck and could go do something productive with my day.

In the meantime, somebody into relative voting is going to wade through that ocean of carp and spend who knows how many hours trying to pick a "best" track.

To me, that's absolute madness and I pity the poor soul unless he just loves listening to horribly recorded music.

If BJ made a rule that all scoring had to be relative, and we therefore had to explain each of our votes relative to the other votes, I'd never do another OSC again as long as I lived. With a straight scoring system, it's simple.

5 - Excellent
4 - Very Good
3 - Average
2 - Below Average
1 - Poor

And I could then even qualify, mechanically, what constitutes each score (Excellent, great melody and harmony, great mix, exciting and/or beautiful, etc.)
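To make that concrete, here's a toy Python sketch of a mechanical rubric like the one above; the criteria names and the count-to-score mapping are purely illustrative assumptions on my part, not anything specified in this thread:

    # Toy sketch of a mechanical 1-5 rubric (illustrative assumptions only).
    CRITERIA = ["melody", "harmony", "mix", "sound_design", "excitement"]

    def absolute_score(track):
        """Count how many criteria a track nails: 5 hits -> 5 (Excellent)."""
        hits = sum(1 for c in CRITERIA if track.get(c, False))
        return max(1, hits)  # clamp so a total miss still scores 1 (Poor)

    # A track that is great in every area gets a 5. Next.
    print(absolute_score({"melody": True, "harmony": True, "mix": True,
                          "sound_design": True, "excitement": True}))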

I know, it sounds really clinical and cold. But when you're scoring a contest where you have to give a definitive value (you can't just say "Man, I really love that track"), that's the only way to be. Otherwise, you start second-guessing yourself and end up playing see-saw with your votes, your mouse cursor bobbing between one number and another, unable to decide where to land. I've played that game. It sucks. It's annoying as hell.

With a 5 point, cold, hard, statistical system, it's essentially robot time.

"Great harmony, great melody, great mix, amazing sound deisgn...You're a 5. Next."

Not "Oh gee, you're great but he's great too and I can't give you both a 5 because I like yours a little better than his but I can't knock his down but I can't cheat you out of the better score but..." and on and on and on until they end up locking me up in a padded room.

No thank you.

Post

Guenon knows waaaay too many big words for me, and my calculator only does square roots, so I'm doomed.

Post

Guenon wrote:
mostlyjustj wrote:And upon further reflection I have no qualms with anybody's system as long as it doesn't involve giving out weird sabotagey 1's to people :D
That's a relief, as I could then test my system derived from the Scoville organoleptic dilution method. As they have traditionally rated the heat level of different peppers... or dang, maybe a direct Wikipedia quote: in order to arrive at a rating, "an exact weight of dried pepper is dissolved in alcohol to extract the heat components (capsaicinoids), then diluted in a solution of sugar water. Decreasing concentrations of the extracted capsaicinoids are given to a panel of five trained tasters, until a majority (at least three) can no longer detect the heat in a dilution. The heat level is based on this dilution, rated in multiples of 100 SHU. [Scoville Heat Units]"

So, if you have a track that is so full of heat you absolutely have to dilute at least five aspects of it until the heat is gone, it's five OHU (OSC Heat Units, better skip the multiplication by a hundred in this scale though, hmmh).

And uh... It's hard, I might be overthinking this.
Ya, I went with the capsaicinoid-taster method LAST month, but geez... TRY to find 5 well-trained tasters out there in a small urban center these days... there was this one rookie out of my 5 who I swear couldn't taste *shoe polish* if his life depended on it! Kept skewing my votes!! :smack:

Post

.....as I dip my tongue in the KIWI :scared:

Post

wagtunes wrote: ...The tracks are submitted and they are ALL God awful. I mean REALLY bad....
That's no problem for me as a relative scorer. Everything gets a 3. Or 2. Or 1. Doesn't matter!
It's only important when there are differences that you hear... you don't have to go looking for them for the sake of it :D

For me it's much more often something like this: [Boring details alert - ignore below at will :) ]

Man these few are top quality, so they have to be 5.
(Actually, there is this one that I would love to give 6, but to do that I would have to move them all down, so never mind.)
Then there's this bunch that are great. OK, they get 4.
Then these are pretty good, but not great like the 4s... OK, they get 3.
What about this bunch that are just OK? I don't want to give them 2s, as people think that means I think they are 'bad'. But it's not fair to the 3s to give them the same, 'cos the ones I already gave 3 are definitely better imo. Hmmm.
Oh, and then there are these couple that I think have some serious issues, along with some good qualities too. Can I use a 1? Not without people thinking I am saying they are terrible. So maybe put them in with the 2s? But that's not fair to the 2s... and I'm already feeling uncomfortable giving them 2 if they will think it means I'm saying they are 'bad'. They should really be 3s if people think that means 'ok' 'cos that's what they are. But I've used the '3' slot for the ones that are pretty good but not great. Hmmm.

It's impossible to give the score I want without using at least 5 buckets, but I'm not willing to say that the 1s and the 2s are 'bad'.
This is literally what I feel most months, especially when there are loads of entries. Constrained to 3 scores unless I really think something is 'bad', which I usually don't.

Sure, I can ignore the concern and just dish out 1s and 2s more - I know some other people do - but if I know people are going to think that means I think their tracks are crap, then I can't do that with integrity unless I do think they are crap, which is unusual.
So there is no room to maneuver. It all gets squashed and over-simplified and unsatisfying.

(But not so unsatisfying that I don't come back month after month so it's not that bad 8) )

A 1-10 or 1-11 scale would solve this completely for me. So Wags, we agree on the solution, while coming from very different starting points :)
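For the curious, here's a toy Python sketch of that relative bucketing; the even quantile split is my own simplification of the process described above, and bumping buckets from 5 to 10 is exactly the 1-10 fix:

    # Toy sketch of relative scoring: split a best-first ranking into buckets.
    def relative_scores(ranked_tracks, buckets=5):
        """Map position i in a best-first ranking to a score in 1..buckets."""
        n = len(ranked_tracks)
        # (i * buckets) // n steps down one bucket every n/buckets positions,
        # so the last entry lands on 1 whenever there are at least as many
        # tracks as buckets.
        return {t: buckets - (i * buckets) // n
                for i, t in enumerate(ranked_tracks)}

    print(relative_scores(["A", "B", "C", "D", "E", "F"]))
    # {'A': 5, 'B': 5, 'C': 4, 'D': 3, 'E': 2, 'F': 1}
    print(relative_scores(["A", "B", "C", "D", "E", "F"], buckets=10))
    # {'A': 10, 'B': 9, 'C': 7, 'D': 5, 'E': 4, 'F': 2}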

Post

mostlyjustj wrote:Interesting Article!

According to these researchers, more available ratings are better up to a *maximum of 11* before the law of diminishing returns starts making this worse: http://waypointgroup.org/why-a-0-10-sca ... st-option/
Just read that - that's interesting, thanks!

Post

Voted.
If you love a track, give it a 5. If you like it a lot, give it a 4. If you like it a little, give it a 3. If you don't like it very much at all, give it a 2. And if you like it a lot less, give it a 1.
quoted from a previous post but edited the last part :-)

This totally works for me. If you think about it, the end product, i.e. the submitted track, sums up everything. It shows one's sound design, composition, arranging, mixing and mastering skills. For a track to be at the top, it needs to be top in each and every area. Pick any track that ended up at the top and study it; you will find that just good sound design or good mixing alone doesn't get it there.

One can spend a helluva lot of time on sound design and come up with excellent kicks, snares, pads, plucks, etc., but if all these sounds don't come together in a track, then it's not gonna do well.
It's understandable that one may not be good in all areas. This is where these community contests help. With some constructive feedback from other participants, one can learn to master these skills. In the end it all boils down to how much effort you put into your track. Believe me, the effort shows in your track.

Post

The discussion shows that many have difficulty giving their votes.
The narrower the scale, the less the variance, and the "easier" the ranking is. I would recommend a 3-step scale. This would, of course, lose some resolution too, but maybe that is what many people are hoping for.

However, I think the main problem here is not the scale but the way the voting is implemented. Although one person in this thread called me names (not in a very mature way), I maintain that the voting should be anonymous. After all, the idea is to rank songs/productions, not persons.
Anonymous voting can be implemented in two alternative ways: either the songwriters' names are hidden, or the voters' names are hidden.

At the moment the discussion atmosphere here reminds me, from time to time, of a high-school popularity contest.

The paradox here is that many people seem to think that anonymous voting is dubious and secretive, and that open voting is free and "fresh". It is quite the contrary: it is no coincidence that most national and municipal elections are implemented as closed votes.

I have been accused of ranking songs low because I dislike a person. On the other hand, another person here was bitter because I ranked his production low, although I had praised another song of his earlier, and he had done me a favour. Quote from that person: "Moral of the story...X helped me get my first placing ever, Z just wasted f'ing time and I look forward to ones from Z from here on out."

A strange "moral of the story". You presume that your friends, and those for whom you have done favours, will give you high scores, and that a person who dislikes you will rank your songs low. :dog:
And the person openly declares he is giving a low score only because someone else gave him one earlier.
No one in this thread commented on that. And the accepted morality here seems to be that sending private pressure mails between votings is absolutely OK for the people in this thread.

A third person here has tried to prove "statistically" that I can't hold the opinions of the songs that my voting reflects. Isn't that paradoxical: according to those accusations, I give low rankings because of my dislike of one person, AND because I don't understand that I should reward the other person with a higher score because he has done me a favour.

To me this kind of morality approaches corruption and cronyism.
I mentioned the cultural differences earlier in this thread: this kind of morality is far from the values I stand for.
