Just as freedom in society helps foster creativity, freedom in programming languages allows programmers to come up with interesting solutions to different problems. By restricting what programmers can do, or by telling them how they should be doing things, you are effectively restricting their freedom to come up with interesting solutions. As a language designer, you should not assume that you are smarter than the programmers who will be using your language (although in the majority of cases this is true).
Just as the presence of people who need leading in a society creates a demand for the reduction of freedom in that society, the presence of programmers who don't know what they're doing creates a demand for languages that will bring the hammer down and tell them what to do. Many programmers need to be saved from themselves: given the proper tools (operator overloading, macros, open classes), they will surely wreak havoc. Giving them a large choice of tools they do not understand will only confuse them, and when it is time to choose which tool to use, they may hesitate and pick the wrong one. Languages like Java (and possibly Python, although I haven't worked with Python enough to validate this claim) decide that there is only one way to go about a problem, relieving programmers of having to decide how to go about doing things - even if that way may not always be optimal. This is why I'm in support of these types of languages: they are excellent for business. They reduce the risk of programmers screwing things up and turn a potential Chernobyl into a burning house.
For good programmers, or when risk is acceptable, these languages serve no purpose except the satisfaction of the programmers who like them. Instead they have a negative effect, restricting the programmer from experimenting with different ideas. Experimentation is the key to the evolution of science, and although it may sometimes have disastrous effects, it may also lead to the discovery of new ideas (like how we can tackle this damn concurrency problem we're coming to). Things like macros allow programmers to tweak the language to make it better - how long has foreach been in the Boost libraries when it should really be part of the C++ language itself? Just as extensions for Firefox give the developers of Firefox hints about what should be in the browser (see Jeff Atwood's article for more), macros and things like them show language developers what people are actually using and what should be built into the language.
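Ruby's open classes (one of the "dangerous" tools mentioned above) play a similar role to macros: if the language is missing something, you can bolt it on yourself. A minimal sketch - the clamp_between helper here is hypothetical, my own name and implementation, not part of Ruby's standard library:

```ruby
# Reopening a core class to "tweak the language": clamp_between is a
# hypothetical helper added to every Integer in the program.
class Integer
  # Clamp self into the inclusive range lo..hi.
  def clamp_between(lo, hi)
    return lo if self < lo
    return hi if self > hi
    self
  end
end

puts 15.clamp_between(0, 10)   # => 10
puts (-3).clamp_between(0, 10) # => 0
```

And when a tweak like this shows up in everybody's codebase, that's exactly the hint to the language developers: Ruby did eventually ship a built-in clamp (Comparable#clamp) for this very reason.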
Many people argue against adding features, saying that they over-complicate a language. Simpler is better, they say. They are correct on some issues, but wrong on others. If you need to use all of a language's features just to program in it, that is a problem. But if you only need 20% of the features to do what you need to do, then what's wrong with having the other 80%? I would argue that it is better to have a language with a lot of features that let you do things in different ways, without making all of those features necessary to complete a given task. Joel Spolsky makes the argument that 80% of Microsoft Office users only use 20% of the features, but each person uses a different 20%. So Office supports a shit-ton of features that most people never use. Why don't programming languages do the same? Well, a few do: C++ comes to mind. I can imagine there are several obscure languages with even larger feature sets, although I can't think of any at the moment. One thing I love about programming languages is learning them, or discovering new features in them. When I first started programming in Ruby, I would program away and then randomly stumble across something I'd never known about before (e.g. the ||= and &&= operators, or that you can put if/unless AFTER the statement you want to be conditional). I love discovering new features, and I imagine many other programmers enjoy this too. I can then use this newfound knowledge to change the way I code, or use whatever features I think will give a better solution to a problem.
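For anyone who hasn't stumbled across them yet, those Ruby features look like this - a small self-contained sketch:

```ruby
# ||= assigns only when the variable is nil or false.
name = nil
name ||= "default"       # name is now "default"

# &&= assigns only when the variable is already truthy.
count = 5
count &&= count + 1      # count is now 6

# if/unless can trail the statement they guard.
puts "big" if count > 3        # prints "big"
puts "tiny" unless count > 3   # prints nothing
```

None of this is necessary - a plain if statement does the same job - but once you know these forms exist, a lot of boilerplate conditionals collapse into one line.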
Pruning features is a good way to keep crazy people from doing crazy things, but it is also a great way to limit how programmers can do things. Try writing a complex number or vector class in Java: the formula z^2 + c (from a Mandelbrot set project I did once upon a time) becomes z.times(z).plus(c). Ugly! Pruning features from a language is like turning English into Newspeak. Could you imagine a writer writing anything without synonyms or antonyms? It would make reading books and blogs a lot more boring.
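For contrast, here is what the same formula looks like in a language with operator overloading - a minimal illustrative Ruby class (Ruby already ships a built-in Complex; the Cpx name and implementation here are my own sketch):

```ruby
# A bare-bones complex number with overloaded + and *.
class Cpx
  attr_reader :re, :im

  def initialize(re, im)
    @re, @im = re, im
  end

  # (a+bi) + (c+di) = (a+c) + (b+d)i
  def +(other)
    Cpx.new(re + other.re, im + other.im)
  end

  # (a+bi) * (c+di) = (ac-bd) + (ad+bc)i
  def *(other)
    Cpx.new(re * other.re - im * other.im,
            re * other.im + im * other.re)
  end
end

z = Cpx.new(1, 1)
c = Cpx.new(0.5, 0)
result = z * z + c   # reads like the math: z^2 + c
```

The class itself is about the same amount of code you would write in Java; the difference is entirely at the call site, where z.times(z).plus(c) becomes z * z + c.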
There are many additions to languages that save programmers from themselves without restricting freedom. Things like the interface keyword add restrictions, but they help programmers convey meaning to other programmers. Things like typedefs and enums are very useful for conveying meaning too. It's like writing comments, without actually writing comments.
I would also really like to talk about garbage collection, which is an excellent improvement to languages. Although it incurs some overhead, it gets rid of the pain-in-the-ass bugs caused by dangling pointers and memory leaks. This is not restricting freedom; it is propping you up so that you don't have to worry about the nuances of the machine. I'm of the opinion that just as modern compilers are generally able to generate faster assembly code than assembly programmers, garbage collectors may eventually outpace manual memory management: they may get better at laying out memory for the cache, reducing the risk of memory fragmentation, and so on. However, I believe there will always be a need for manual memory management in systems with limited processing power, such as embedded systems.
I believe that freedom fosters creativity, and it is this creativity that will allow us to solve our current problems, or come up with better solutions to older problems. Why limit ourselves?