I'm interested in building my own programming language and I want to know: what are your most-loved and most-hated features of any programming languages?
Here are a few of mine:
When I create my own language (soon, hopefully), I would love for it to emulate R's paradigm where scalars are just length-1 vectors:
```r
> x <- 3
> length(x)
[1] 1
> x <- c(1,2,3)
> length(x)
[1] 3
```
...this means (as you can see above) that you can use functions like length() on scalars, which are actually just length-1 vectors. I would like to extend this to matrices of any dimensionality and length, so that every bit of data is actually an N-dimensional matrix. This would unify data processing across data of any dimensionality. (Though performance would take a hit, of course.)
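To make that unification concrete, here's a minimal sketch in Java. The NDArray type and its methods are invented for illustration (no such built-in exists); the point is just that a "scalar" is a length-1 instance of the same array type everything else uses:

```java
// Hypothetical sketch: in this language, every value would be an
// N-dimensional array, and a "scalar" is just a length-1 instance.
final class NDArray {
    final int[] shape;   // {1} for a scalar, {3} for a vector, {2, 3} for a matrix...
    final double[] data; // flat row-major storage

    NDArray(int[] shape, double... data) {
        this.shape = shape;
        this.data = data;
    }

    static NDArray scalar(double x)     { return new NDArray(new int[]{1}, x); }
    static NDArray vector(double... xs) { return new NDArray(new int[]{xs.length}, xs); }

    // length() works uniformly on scalars and vectors alike,
    // mirroring R's length() on a length-1 vector.
    int length() { return data.length; }
}
```

With this, NDArray.scalar(3).length() is 1 and NDArray.vector(1, 2, 3).length() is 3, matching the R session above.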
I also love Haskell's Integer type, which is an easy-to-use, infinite-precision integer:
```haskell
ghci> let factorial n = product [1..n]
ghci> factorial 100
93326215443944152681699238856266700490715968264381621468592963895217599993229915608941463976156518286253697920827223758251185210916864000000000000000000000000
```
Java similarly has BigInteger and BigDecimal, which are arbitrary-precision integers and decimals, respectively. Since a user can never actually enter an infinite-precision number, it should be possible to keep track of the numbers entered and used, and only round or truncate the result when the user prints or exports the data to a file. You could also keep track of significant digits and use those as the default precision when printing.
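For comparison, here is the same factorial computed with Java's real BigInteger class (the loop is just one straightforward way to write it):

```java
import java.math.BigInteger;

class Factorial {
    // Arbitrary-precision factorial via BigInteger, mirroring the
    // Haskell Integer example above: no overflow, ever.
    static BigInteger factorial(int n) {
        BigInteger result = BigInteger.ONE;
        for (int i = 2; i <= n; i++) {
            result = result.multiply(BigInteger.valueOf(i));
        }
        return result;
    }
}
```

factorial(100) produces the same 158-digit number as the ghci session, just with considerably more ceremony than Haskell's built-in Integer.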
Imagine, for instance, that instead of calculating x = 1/9 and truncating the result at some point to store it in the variable x, you instead keep a record of the formula which was used to construct x. If you then declare a variable y = x * 3, you could either store the formula in memory as y = (1/9) * 3, or recognize that the user entered 1 and 9 as integers and simplify the formulaic representation of y internally to y = 1/3.
(The way I see it, if this were implemented in a programming language, it would mean that there's no such thing as a "variable", really. Every variable would instead be a small function, which is called and evaluated every time it is used.)
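A tiny sketch of the exact-representation half of this idea, using a hypothetical Rational type (invented for illustration): values built from integer expressions stay exact, simplification happens eagerly, and decimal approximation appears only at output time.

```java
// Hypothetical Rational type: 1/9 is stored exactly, and (1/9) * 3
// simplifies to 1/3 internally; rounding happens only on output.
final class Rational {
    final long num, den;

    Rational(long num, long den) {
        long g = gcd(Math.abs(num), Math.abs(den));
        this.num = num / g;
        this.den = den / g;
    }

    static long gcd(long a, long b) { return b == 0 ? a : gcd(b, a % b); }

    Rational times(Rational other) {
        return new Rational(num * other.num, den * other.den);
    }

    // Only here, at print/export time, does any decimal approximation appear.
    double toDouble() { return (double) num / den; }

    @Override public String toString() { return num + "/" + den; }
}
```

Here new Rational(1, 9).times(new Rational(3, 1)) normalizes to 1/3 in the constructor, which is exactly the simplification described above.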
Forgoing that simplification, you could have y refer to x whenever it's calculated and implement spreadsheet-like functionality, where updating one variable can have a ripple effect on other variables. When inspecting a variable, you could display the formula used to calculate it. Or something.
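Here's roughly what that "every variable is a small function" model looks like, sketched with Java's Supplier (the Cells class and its fields are invented for illustration):

```java
import java.util.function.Supplier;

class Cells {
    // x is plain data; y is stored as a formula that re-reads x on
    // every evaluation, so updating x ripples through to y.
    static double x = 1.0 / 9.0;
    static Supplier<Double> y = () -> x * 3;
}
```

Calling Cells.y.get() yields roughly 0.333...; after setting Cells.x = 2.0 / 9.0, the same call yields roughly 0.666..., without y ever being reassigned — the spreadsheet-style ripple effect.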
Finally, one language feature which I would never wish to emulate is Java's number hierarchy.
In Java, there are primitives like int, double, and boolean, which are not part of Java's class hierarchy. These are meant to emulate C's basic data types and can be used for fast calculations. They are some of the only types not descended from Java's overarching Object class. As Java does not support operator overloading, the arithmetic operators +, -, *, /, and so on are only defined for primitives (and + is overloaded internally for Strings). So if you want to do any math, you need one of these primitive types... got it?
Well, Java also has wrapper classes for each of the primitive types: Integer, Double, Boolean, and so on. (Note also that int is wrapped by Integer and not Int. Why? I don't know. If you do, please let me know in the comments.) These wrapper classes allow you to perform proper OOP with numbers. Java "boxes" and "unboxes" the number types automatically, so you don't have to convert an Integer to an int manually to do arithmetic.
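Autoboxing in action (the caching behavior noted in the comment is guaranteed by the Java Language Specification for values in −128 to 127; the class and method names here are invented for the demo):

```java
class BoxingDemo {
    static int demo() {
        Integer boxed = 5;   // autoboxing: int -> Integer
        int sum = boxed + 1; // auto-unboxing: Integer -> int for +
        return sum;
    }

    static boolean cached() {
        Integer a = 127, b = 127;
        // The JLS requires boxed values in [-128, 127] to be cached,
        // so a and b are the very same object here.
        return a == b;
    }
}
```

That caching detail is also why comparing Integers with == is a classic Java trap: it happens to work for small values and silently breaks for larger ones.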
It gets worse! The number class hierarchy in Java is flat, meaning that all of the numeric classes (Integer, Double, Long, and so on) descend directly from Number, and Number implements no methods to do anything other than convert a given Number to a particular primitive type. This means that if you want to, for instance, do something as simple as define a method which finds the maximum of two Numbers, you need to first convert each number to a Double and then use Double.max() for comparison. You need to convert to Double so you don't lose precision converting to a "smaller" type (assuming you're not also accepting BigDecimals, which makes this even more complicated).
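In code, that least-bad generic maximum looks like this (maxOf is my name for it; Number, doubleValue(), and Double.max are real Java):

```java
class NumberMax {
    // The only route Number gives you: widen everything to double.
    // (This silently loses exactness for longs above 2^53 and ignores
    // BigDecimal entirely, as noted above.)
    static double maxOf(Number a, Number b) {
        return Double.max(a.doubleValue(), b.doubleValue());
    }
}
```

maxOf(3, 2.5) works only because the int and double arguments are autoboxed to Integer and Double, both of which descend from Number.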
Number (for the love of god) doesn't even implement Java's Comparable interface, which means you can't even compare a Float to a Double without Java unboxing them to primitives, implicitly casting the float to a double, and then performing the comparison.
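Concretely: f.compareTo(d) below won't even compile, because Float implements Comparable&lt;Float&gt; only, so you're forced down to primitives via something like Double.compare (the demo class is mine; the API calls are real):

```java
class CompareDemo {
    static int compare() {
        Float f = 1.5f;
        Double d = 2.0;
        // f.compareTo(d);           // does not compile: Comparable<Float> only
        return Double.compare(f, d); // unbox both, widen float -> double, compare
    }
}
```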
I wouldn't wish Java's Number hierarchy on my worst enemy.