Idk, I've read a lot of Selridge's comments up and down the whole post now, and it really seems like any idea of taste, to them, defaults to classism, and then they misapply that framework here, which is realistically one of the fairest arenas.
If someone likes what you make it doesn't matter where you come from.
It doesn’t default to class, people just pretend class doesn’t apply at all.
Taste is often advanced as this subjective yet ultimately discriminating notion which refuses to be pinned down. Insistent but ineffable. This idea that you and I know what good software is due to having paid dues and they don’t, and the truth will out, is a common one!
My argument isn’t that it’s class. It’s that this framework of describing taste is PURPOSE BUILT to ignore questions like status, access, and money in favor of standing in judgment.
I hear you, but I at least try to disarm that notion. I even have a footnote about how taste is entirely group-dependent and measured by reception. So while I think your point is more broadly applicable, I feel it has less to do with what I was writing about, which is largely in the technical realm and, I'd argue, fairly meritocratic.
We are in the middle of an earthquake. The 90s was like this, but it’s bigger. Radical changes in what it means to build software are happening right now. That will, without a shadow of a doubt, result in equally radical changes in what constitutes good and bad work.
Maybe, just maybe, the thing that seems really durable (taste) is already getting put into a blender that’s still running.
Quite the opposite actually. certain live coding languages give you the tools to create extremely complex patterns in a very controlled manner, in ways you simply wouldn't be able to do via any other method. the most popular artist exploring these ideas is Kindohm, who is sort of an ambassador figure for the TidalCycles language.
Having used TidalCycles myself, the language lends itself particularly well to this kind of stuff as opposed to more traditional song/track structures. And yet it also constrains and prevents the construction of bad programs in a very strict manner via its type system and compiler.
It's also notable for being probably the only Haskell library used almost exclusively by people with no prior knowledge of Haskell, which is an insane feat in itself.
> Quite the opposite actually. certain live coding languages give you the tools to create extremely complex patterns
I think I must not be expressing myself well. These tools seem to be optimized for parametric pattern manipulation. You essentially declare patterns, apply transformations to them, and then play them back in loops. The whole paradigm is going to encourage a very specific style of composition where repeating structures and their variations are the primary organizational principle.
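To make concrete what I mean by "declare patterns, transform them, loop them," here's a toy sketch in plain Python (my own names, not Tidal's actual API): a pattern is a function from a cycle number to timed events, and transformations are combinators over those functions.

```python
# Minimal sketch of the declare/transform/loop paradigm (Python, not Tidal).
# A "pattern" maps a cycle number to a list of (onset, event) pairs,
# with onsets expressed as fractions of the cycle.

def seq(*events):
    """Declare a pattern: spread events evenly across one cycle."""
    n = len(events)
    return lambda cycle: [(i / n, e) for i, e in enumerate(events)]

def fast(factor, pat):
    """Transformation: squeeze `factor` repetitions into one cycle."""
    def squeezed(cycle):
        out = []
        for rep in range(factor):
            for onset, e in pat(cycle * factor + rep):
                out.append((rep / factor + onset / factor, e))
        return out
    return squeezed

def every(n, transform, pat):
    """Apply `transform` on every n-th cycle, leave others untouched."""
    return lambda cycle: (transform(pat) if cycle % n == 0 else pat)(cycle)

drums = every(2, lambda p: fast(2, p), seq("bd", "sn"))
print(drums(0))  # transformed cycle: four events
print(drums(1))  # plain cycle: two events
```

Repetition-with-variation falls straight out of the combinators, which is exactly the stylistic pull I'm describing.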
Again, I'm not trying to critique the styles of music that lend themselves well to these tools.
> And yet it also constrains and prevents the construction of bad programs in a very strict manner via its type system and compiler.
Looking at the examples in their documentation, all I see are patterns written inside plain strings.
So it definitely isn't leveraging GHC's typechecker for your compositions. Is the TidalCycles runtime doing some kind of runtime typechecking on whatever it parses from these strings?
> It's also notable for being probably the only Haskell library used almost exclusively by people with no prior knowledge of Haskell, which is an insane feat in itself.
I think Pandoc or Shellcheck would win on this metric.
> So it definitely isn't leveraging GHC's typechecker for your compositions. Is the TidalCycles runtime doing some kind of runtime typechecking on whatever it parses from these strings?
the runtime is GHC (well GHCi actually). tidal's type system (and thus GHC's typechecker) ensures that only computationally valid pattern transformations can be composed together. if you're interested in the type system here's a good overview from a programmer's perspective https://www.imn.htwk-leipzig.de/~waldmann/etc/untutorial/tc/...
these strings are a special case, they're formatted in "mini-notation" which is parsed into composed functions at runtime. a very expressive kind of syntactic sugar you could say. while they're the most immediately obvious feature of Tidal (and have since been adapted in numerous other livecoding languages), mini-notation is really just the tip of the iceberg.
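to illustrate the "parsed into composed functions at runtime" idea, here's a toy Python sketch (emphatically not Tidal's real parser): top-level steps split the cycle evenly, and [brackets] subdivide their own step.

```python
# Toy illustration (Python, not Tidal's actual parser) of how a
# mini-notation-style string can become timed events: top-level steps
# split the cycle evenly, and [brackets] subdivide their step.

def parse(s):
    """Parse 'bd [sn sn] hh' into a nested list of steps."""
    tokens = s.replace("[", " [ ").replace("]", " ] ").split()
    def walk(i):
        steps = []
        while i < len(tokens) and tokens[i] != "]":
            if tokens[i] == "[":
                sub, i = walk(i + 1)
                steps.append(sub)
            else:
                steps.append(tokens[i])
                i += 1
        return steps, i + 1
    return walk(0)[0]

def schedule(steps, start=0.0, width=1.0):
    """Flatten nested steps into (onset, sample) pairs within one cycle."""
    out = []
    w = width / len(steps)
    for i, step in enumerate(steps):
        if isinstance(step, list):
            out.extend(schedule(step, start + i * w, w))
        else:
            out.append((start + i * w, step))
    return out

events = schedule(parse("bd [sn sn] hh"))
print(events)  # the bracketed pair shares the middle third of the cycle
```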
>The whole paradigm is going to encourage a very specific style of composition where repeating structures and their variations are the primary organizational principle.
but that applies to virtually all music, from bach to coltrane to the beatles! my point is that despite what the average livecoder might stream/perform online, live coding languages are certainly not restricted to or even particularly geared towards repetitive dance music - it just happens that that's a common denominator of the kind of demographic who's interested in livecoding music in the first place.
i'd argue that (assuming sufficient knowledge of the underlying theory) composing a fugue in the style of bach is much easier in tidal than in a DAW or other music software.
on the more experimental end, a composition in which no measure ever repeats fully is trivial to realize in tidalcycles - it takes only a handful of lines of code to build up a stochastic composition based on markov chains, perlin noise and conditional pattern transformations. via the latter you can actually sculpt these generative processes into something that sounds intentional and follows some inner logic rather than just being random.
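here's roughly what such a generative process looks like, sketched in plain Python rather than Tidal (the transition table and the "invert every 4th bar" rule are my own toy choices): a first-order Markov chain over scale degrees, plus a conditional transformation so the output follows an inner logic instead of being pure randomness.

```python
import random

# Sketch of a stochastic composition process (plain Python, not Tidal):
# a first-order Markov chain over scale degrees, plus a conditional
# transformation so bars vary while obeying the same inner logic.

TRANSITIONS = {  # degree -> possible next degrees (hypothetical choices)
    0: [2, 4, 7], 2: [0, 4], 4: [2, 5, 7], 5: [4, 7], 7: [0, 4, 5],
}

def markov_bar(start, length, rng):
    """Random-walk one bar of `length` scale degrees."""
    bar, degree = [], start
    for _ in range(length):
        bar.append(degree)
        degree = rng.choice(TRANSITIONS[degree])
    return bar

def conditionally_transform(bar, bar_number):
    """Every 4th bar, invert the melodic contour around its first note."""
    if bar_number % 4 == 3:
        root = bar[0]
        return [root - (d - root) for d in bar]
    return bar

rng = random.Random(0)  # fixed seed so the piece is reproducible
piece = [conditionally_transform(markov_bar(0, 8, rng), i) for i in range(8)]
print(piece)
```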
the text-based interface makes it much easier to use than anything GUI-based. it's all just pure functions that you can compose together, you could almost say that Tidal is like a musical equivalent of shell programs and pipes. equally useful and expressive both for a 10 year old and a CS professor.
> i'd argue that ... composing a fugue in the style of bach is much easier in tidal than in a DAW or other music software. on the more experimental end, a composition in which no measure ever repeats fully is trivial to realize in tidalcycles - it takes only a handful of lines of code to build up a stochastic composition based on markov chains, perlin noise and conditional pattern transformations. via the latter you can actually sculpt these generative processes into something that sounds intentional and follows some inner logic rather than just being random.
I agree that it's easier to build a composition in a coding environment that uses stochastic models, markov chains, noise, conditions, etc. But I don't think that actually makes for compelling music. It can render a rough facsimile of the structure, but the result is uncanny. The magic is still in the tiny choices and long arc of the composition. Leaving it to randomness is not sufficient.
Bach's style of composition _is_ broadly algorithmic. So much so that his style is taught in conservatories as the foundational rules of Western multi-voice writing, but it's still not a perfect machine. Taste and judgment have to be exercised at key moments in the composition on a micro level. You can intellectually understand florid counterpoint on a rules-based level, but you still have to listen to what's being written to decide if it's musically compelling or if it needs to be revised.
The proof is in the pudding. If coded music were that good, we would be able to list composers who work in this manner. We might even have charting music. But we don't, and the best work is still being done with instruments in hand, or written on a staff, or sequenced in a DAW.
I want this paradigm to work - and perhaps it can - but I've yet to hear work that lives up to the promise.
If you use TidalCycles, the standalone version, you can pipe it out to as many midi busses as you want - it’s excellent controlling Reason, for example.
Had to do the same with my IdeaPad Flex 5, which always felt super infuriating.
I ended up so frustrated with it that i ultimately caved and bought a MacBook. I guess i'll have to stay on desktop machines for x86 until someone finally figures out how to make a laptop that runs Linux and stays silent under heavy load.
From my own experience and that of many peers I've talked to it really seems to benefit DJing as a skill. It's an activity where you get to be hyperfocused and freely associating/improvising at the same time. Not really a career-path I'd recommend to anyone though.
regarding those usual objections, i'd argue that a spectrogram representation of a given piece of audio is just a different (lossy) encoding of the same content/information, so any such objections would still apply here.
You would be absolutely correct. The lossiness is in the resolution of the image (512x512 is pretty terrible), but given enough image resolution it's just an FFT, and the only reason that stuff falls short is that people don't, in turn, give it enough resolution. With wildly overkill FFT resolution you could do anything you wanted with no loss of tone quality. Turn that into visual images and run diffusion on them, and you could do AI diffusion at convincing audio quality.
In theory, tone quality is not an objection here. When it sounds bad, it's because it's 512x512, because the FFT resolution isn't up to the task, and so on. People cling to very inadequate audio standards for digital processing, but you don't have to.
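The invertibility claim is easy to check in miniature with a naive DFT (pure Python, my own toy example): keep the full complex spectrum, magnitude and phase, and the inverse transform reconstructs the signal exactly up to float rounding. Magnitude-only spectrogram images are where information actually gets dropped.

```python
import cmath

# Round-tripping a signal through a DFT (pure Python, naive O(n^2)).
# Keeping the full complex spectrum -- magnitude AND phase -- the inverse
# transform reconstructs the signal exactly (up to float rounding).

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

signal = [0.0, 1.0, 0.0, -1.0, 0.5, -0.5, 0.25, -0.25]
roundtrip = idft(dft(signal))
error = max(abs(a - b) for a, b in zip(signal, roundtrip))
print(error)  # tiny float noise
```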
great stuff! while it comes with the usual smeary iFFT artifacts that AI-generated sound tends to have, the results are surprisingly good.
i especially love the nonsense vocals it generates in the last example, which remind me of what singing along to foreign songs felt like in my childhood.
I played with Tidal when it first came out for music. I do like it. To me though, I prefer Grace[1] (CM, Lisp) and Extempore[2] (Scheme, xtlang). You can work within the grammar of music, or you can program your own DSPs. Xtlang in Extempore is a pretty neat DSL created by Andrew Sorensen. To me Tidal was mainly rhythm-based. I am sure it has evolved since then, but it felt like a coding sequencer or pattern generator. I liked Sporth, (Soundpipe + Forth), but it appears something has occurred on the internets that I am not aware of. The repo is still here[3], but it is not being worked on any longer from what I have read there.
If interesting and unusual is your thing for generative music, check out Orca[4]. That has eaten hours of my time far more than Tidal for me.
This instantly reminded me of the paper "pattern recognition in a bucket"[0], which I saw referenced a lot when I first started reading about AI in general. I only have surface-level knowledge of the field, but how exactly does what's described in the article differ from reservoir computing? (The article doesn't mention that term, so I assume there must be a difference.)
In this PNN approach you are solving for what additional stimuli, when applied to the system alongside the inputs, produce the desired result for a given input. In reservoir computing (RC) you don’t bother to provide any additional stimuli, and find the linear combination of reservoir outputs that gives the desired result. Training the former is more demanding and analogous to a NN (thus the name), but directly produces your answer from the system. The latter is very easy to train (one regression) but requires post processing for inference.
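If the distinction is easier to see in code, here's a toy reservoir computing sketch in plain Python (my own example, not from the paper or article): the recurrent reservoir is random and fixed, and only a linear readout is fit with a single regression, here on a short-term-memory task (output the previous input).

```python
import math, random

# Minimal reservoir computing sketch (pure Python, toy example):
# a fixed random recurrent "reservoir" is driven by the input, and ONLY
# a linear readout is trained -- one least-squares regression.

rng = random.Random(42)
N = 20  # reservoir size
W_in = [rng.uniform(-1, 1) for _ in range(N)]
W = [[rng.uniform(-0.3, 0.3) for _ in range(N)] for _ in range(N)]

def run_reservoir(inputs):
    """Collect reservoir states while the (untrained) dynamics run."""
    h = [0.0] * N
    states = []
    for u in inputs:
        h = [math.tanh(W_in[i] * u + sum(W[i][j] * h[j] for j in range(N)))
             for i in range(N)]
        states.append(h[:] + [1.0])  # append a bias term
    return states

def least_squares(H, y, ridge=1e-6):
    """Solve (H^T H + ridge*I) w = H^T y by Gaussian elimination."""
    d = len(H[0])
    A = [[sum(r[i] * r[j] for r in H) + (ridge if i == j else 0.0)
          for j in range(d)] for i in range(d)]
    b = [sum(r[i] * yi for r, yi in zip(H, y)) for i in range(d)]
    for c in range(d):  # elimination without pivoting (fine for this demo)
        for r in range(c + 1, d):
            f = A[r][c] / A[c][c]
            A[r] = [a - f * ac for a, ac in zip(A[r], A[c])]
            b[r] -= f * b[c]
    w = [0.0] * d
    for c in reversed(range(d)):
        w[c] = (b[c] - sum(A[c][j] * w[j] for j in range(c + 1, d))) / A[c][c]
    return w

inputs = [rng.uniform(-0.5, 0.5) for _ in range(300)]
targets = [0.0] + inputs[:-1]            # y(t) = u(t-1): remember one step
H = run_reservoir(inputs)
w = least_squares(H[50:], targets[50:])  # drop warm-up; one regression
pred = [sum(wi * hi for wi, hi in zip(w, h)) for h in H]
err = sum((p - t) ** 2 for p, t in zip(pred[50:], targets[50:])) / 250
```

The PNN approach described above would instead search for extra stimuli to inject into the dynamics themselves, leaving no readout to fit afterwards.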