A few weeks back, Chip Collection wrote a little post asking if all forms of sound synthesis had been invented.
The answer is "of course not" - new methods of generating sound will always be discovered and implemented, at least by researchers if not musicians.
But the real question to ask is "Does it even matter?"
The art or practice of subtractive synthesis (what a Moog, Prophet-5, Oberheim, and all the other non-DX-7 classic synths use) has been refined and matured to a point where its techniques are robust and powerful. One cannot make "any" sound with it, but the range of sounds a talented programmer using a decently flexible instrument can create is staggering.
Subtractive synthesis starts with an oscillator generating a simple waveform (triangle, sawtooth, or pulse) with rich harmonics. A filter (usually low pass) is applied to subtract some of the harmonics of the base waveform, thus changing its timbre (and giving the synthesis method its name). Add some envelopes to control volume and the filter over time, and some other modulation and you're done.
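That whole signal chain - rich oscillator, swept low-pass filter, envelopes - is short enough to sketch in a few lines. This is a minimal illustration, not any particular instrument's architecture; the sample rate, envelope times, and cutoff sweep are all assumed values chosen to sound plausible:

```python
# Sketch of subtractive synthesis: a sawtooth oscillator, a one-pole
# low-pass filter swept by an envelope, and an amplitude envelope.
import numpy as np

SR = 44100  # sample rate in Hz (assumed)

def sawtooth(freq, seconds):
    """Harmonically rich sawtooth ramping from -1 to 1 each cycle."""
    t = np.arange(int(SR * seconds)) / SR
    return 2.0 * (t * freq % 1.0) - 1.0

def decay_env(seconds, tau):
    """Simple exponential-decay envelope."""
    t = np.arange(int(SR * seconds)) / SR
    return np.exp(-t / tau)

def one_pole_lowpass(signal, cutoff):
    """One-pole low-pass; `cutoff` is a per-sample array of Hz values,
    so an envelope can sweep the filter over time."""
    out = np.empty_like(signal)
    y = 0.0
    for i, x in enumerate(signal):
        a = 1.0 - np.exp(-2.0 * np.pi * cutoff[i] / SR)
        y += a * (x - y)  # move toward the input; `a` sets how fast
        out[i] = y
    return out

def note(freq=110.0, seconds=1.0):
    raw = sawtooth(freq, seconds)
    # Sweep the cutoff from bright (4 kHz) down toward dark (200 Hz),
    # subtracting more and more of the saw's upper harmonics...
    cutoff = 200.0 + 3800.0 * decay_env(seconds, tau=0.3)
    filtered = one_pole_lowpass(raw, cutoff)
    # ...then shape the loudness with a second envelope.
    return filtered * decay_env(seconds, tau=0.5)

samples = note()
```

The timbre change comes entirely from removing harmonics that were in the oscillator to begin with - which is exactly why a sine wave, with only one harmonic, makes a poor starting point for this method.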
When one adds custom waveforms to the usual triangle-sawtooth-pulse combinations, it gets more interesting - other waveforms provide different harmonics to manipulate with the filter and thus produce different timbres. Synthesizers like the Korg Wavestation used this technique. Factor in wavetables (a sequence of waveforms) or samples (very long custom waveforms) combined with subtractive techniques and the possibilities multiply.
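The wavetable idea - a sequence of single-cycle waveforms crossfaded over the note's duration - can be sketched the same way. The three-waveform table, the crossfade schedule, and the table length here are all invented for illustration; real wavetable instruments hold far larger tables:

```python
# Sketch of a wavetable oscillator: store a sequence of single-cycle
# waveforms and crossfade between them over the note, so the harmonic
# content evolves even before any filter is applied.
import numpy as np

SR = 44100
TABLE_LEN = 2048  # samples per single-cycle waveform (assumed)

cycle = np.arange(TABLE_LEN) / TABLE_LEN
# A tiny "wavetable": sine -> triangle -> sawtooth.
sine = np.sin(2 * np.pi * cycle)
tri = 2.0 * np.abs(2.0 * cycle - 1.0) - 1.0
saw = 2.0 * cycle - 1.0
wavetable = np.stack([sine, tri, saw])

def wavetable_note(freq=220.0, seconds=1.0):
    n = int(SR * seconds)
    phase = (np.arange(n) * freq / SR) % 1.0        # oscillator phase, 0..1
    pos = np.linspace(0.0, len(wavetable) - 1.0001, n)  # position in table
    idx = pos.astype(int)                           # current waveform
    frac = pos - idx                                # crossfade amount
    sample_idx = (phase * TABLE_LEN).astype(int)
    a = wavetable[idx, sample_idx]
    b = wavetable[np.minimum(idx + 1, len(wavetable) - 1), sample_idx]
    return (1.0 - frac) * a + frac * b              # linear crossfade

out = wavetable_note()
```

A sample, in these terms, is just a very long custom waveform read once instead of looped - and either can still be fed through the filter-and-envelope chain above.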
Frequency modulation (FM - what DX-7s use) is also extremely powerful, but far more complex to master. Then there's granular synthesis (limited use in instruments today) and additive synthesis.
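FM's core trick is small enough to show, even if mastering it isn't: one sine oscillator modulates the phase of another, and the modulation index - not a filter - controls the brightness. This is a two-operator toy, nothing like a full six-operator DX-7 patch; the ratio, index, and decay time are assumed values:

```python
# Minimal two-operator FM: a modulator sine wiggles the phase of a
# carrier sine. A decaying modulation index makes the tone start
# bright and mellow out - a classic FM bell/electric-piano gesture.
import numpy as np

SR = 44100  # sample rate in Hz (assumed)

def fm_note(carrier_hz=220.0, ratio=2.0, index=5.0, seconds=1.0):
    t = np.arange(int(SR * seconds)) / SR
    idx_env = index * np.exp(-t / 0.4)   # brightness fades over time
    modulator = np.sin(2 * np.pi * carrier_hz * ratio * t)
    return np.sin(2 * np.pi * carrier_hz * t + idx_env * modulator)

tone = fm_note()
```

The complexity people struggle with isn't the math - it's that the relationship between these few parameters and the resulting sidebands is far less intuitive than turning a filter knob.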
There are already entire universes of sound that musicians won't explore. Most players use the presets in their synthesizers with little modification, and programming becomes more difficult as the instrument gets more powerful. Learning to create new sounds is a different skill from learning to compose or perform, and many people end up specializing in one or the other. After all, how many composers build instruments?
With today's mature synthesis methods, the ability to combine those methods (either in a single instrument or by layering instruments), and the incredible array of effects available, it is easily possible to transform any arbitrary sound into almost any other arbitrary sound using simple tools on a cheap PC. The field is far from played out.
Given that it is usually frustration, dissatisfaction, or boredom with the status quo that drives inventors and artists to look for something new, it's not difficult to see why there's less happening here. When one factors in the business reality that most customers don't want instruments that are even more complex to program and can barely wrap their heads around the existing decades-old technologies, commercial development of new synthesis methods becomes less likely still. New user interfaces for synthesis, however - ones that reduce complexity or work in more intuitive or inventive ways - are highly likely.
One wonders what Karlheinz Stockhausen could have done had he been interested in using all of today's modern technology and developing new methods of synthesis.