I come from functional languages (e.g. Haskell), and I rely heavily on typeclasses to achieve polymorphism, which is also a structured approach to ad-hoc overloading.
However, I've recently started to understand how OOP models real problems, and I'm curious why we need dynamic polymorphism in OOP languages (such as Java). In my experience, most function calls can be resolved at compile time, since many functional languages do not support subtyping.
So my question is: in what kinds of situations do we have to use dynamic polymorphism instead of compile-time polymorphism?
In Haskell, we replace "dynamic polymorphism" with higher-order functions.
Consider the following problem: we want to define a type that denotes a predicate. We will eventually use this Predicate type when we implement our lists, so that we can define the filter function. We would like to be able to easily define the equality predicate and the less-than predicate, and to be able to combine two predicates by joining them with "and".
A reasonable Java attempt would look like this.
interface Predicate<T> {
    public abstract boolean predicate(T x);
}

class EqualsPredicate<T> implements Predicate<T> {
    private final T value;

    public EqualsPredicate(T value) {
        this.value = value;
    }

    @Override
    public boolean predicate(T x) {
        return this.value.equals(x);
    }
}

// Note the bound must be written "T extends Comparable<T>" -- Java uses
// "extends" for type-parameter bounds even when the bound is an interface.
class LessPredicate<T extends Comparable<T>> implements Predicate<T> {
    private final T value;

    public LessPredicate(T value) {
        this.value = value;
    }

    @Override
    public boolean predicate(T x) {
        return this.value.compareTo(x) < 0;
    }
}

class AndPredicate<T> implements Predicate<T> {
    private final Predicate<T> p1;
    private final Predicate<T> p2;

    public AndPredicate(Predicate<T> p1, Predicate<T> p2) {
        this.p1 = p1;
        this.p2 = p2;
    }

    @Override
    public boolean predicate(T x) {
        return p1.predicate(x) && p2.predicate(x);
    }
}
In Haskell, the answer to this conundrum is obvious. We just define
type Predicate t = t -> Bool
makeEqualsPredicate :: Eq t => t -> Predicate t
makeEqualsPredicate = (==)
makeLessPredicate :: Ord t => t -> Predicate t
makeLessPredicate = (<)
makeAndPredicate :: Predicate t -> Predicate t -> Predicate t
makeAndPredicate p1 p2 x = p1 x && p2 x
-- or, even more succinctly, makeAndPredicate = liftA2 (&&)
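Connecting this back to the original goal, filter consumes such a predicate directly. A small usage sketch built from the definitions above:

main :: IO ()
main = print (filter keep [1 .. 10 :: Int])
  where
    -- keep only elements that are both greater than 3 and equal to 5
    keep = makeAndPredicate (makeLessPredicate 3) (makeEqualsPredicate 5)
-- prints [5]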
Java allows "dynamic dispatch" of methods through subclassing. Haskell allows "dynamic dispatch" of functions through higher-order functions.
But wait, you say. Predicate was an interface with only one method. What should we do if we want to have two methods?
Well, if an interface with one method corresponds to a function, an interface with two methods must correspond to a pair of functions. This is just the OOP principle known as "composition over inheritance".
So we can always replace Java-style dynamic polymorphism with Haskell-style higher-order functions.
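For instance, an interface bundling an equality test with a hash function corresponds to a record of two functions. A hypothetical sketch (the EqHash name and its fields are made up for illustration):

data EqHash t = EqHash
  { isEqual :: t -> t -> Bool
  , hashOf  :: t -> Int
  }

-- one "instance" of the two-method interface, as a plain value
eqHashInt :: EqHash Int
eqHashInt = EqHash { isEqual = (==), hashOf = id }

Passing an EqHash t around plays exactly the role of passing an object that implements a two-method interface.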
In fact, you can see this observation reflected in modern Java as well. As of Java 8, you can mark an interface with a single abstract method with the @FunctionalInterface annotation, and any such interface can be instantiated directly with a lambda. So in Java 8 you could write:
@FunctionalInterface
interface Predicate<T> {
    public abstract boolean predicate(T x);

    // Static factory methods must declare their own type parameters.
    public static <J> Predicate<J> makeEqualsPredicate(J t) {
        return (x) -> t.equals(x);
    }

    public static <J extends Comparable<J>> Predicate<J> makeLessPredicate(J t) {
        return (x) -> t.compareTo(x) < 0;
    }

    // An interface method with a body must be marked "default".
    public default Predicate<T> makeAndPredicate(Predicate<T> other) {
        return (x) -> this.predicate(x) && other.predicate(x);
    }
}

// Usage: lambdas and factories instead of hand-written subclasses, e.g.
// Predicate<Integer> p = Predicate.makeEqualsPredicate(5).makeAndPredicate(x -> x > 3);
With many people's help, and after reflecting on a number of designs, I now have some of the answers I wanted. Since Rust has good support for both static and dynamic polymorphism, I will use Rust in this answer to demonstrate my points.
I now have two points in favor of dynamic dispatch: user-friendly extensibility and smaller compiled size.
Many people argue that dynamic dispatch is suitable when you have a container collecting objects of various types. For example:
trait MyTrait {
    fn method(&self, ...) -> ReturnType;
}

type MyContainer = Vec<Box<dyn MyTrait>>;

fn main() {
    ...
    // the_container: MyContainer
    the_container.iter().map(... { x.method(...) } ...) // demo code only
}
In the code above, at compile time we only know that the elements are trait objects, which means the program must use a vtable-based strategy to find which method implementation to call while executing the expression in the main function.
However, there's another way to implement nearly the same thing:
enum MyContainerType {
    Type0(A),
    Type1(B),
    ...
}

impl MyTrait for MyContainerType {
    fn method(&self, ...) -> ReturnType {
        match self {
            MyContainerType::Type0(a) => { ... },
            MyContainerType::Type1(b) => { ... },
            ...
        }
    }
}

type MyContainer = Vec<MyContainerType>;

fn main() {
    ...
    // my_container: MyContainer
    my_container.iter().map(... { x.method(...) } ...); // demo
}
In this way, no dynamic polymorphism is required. However, consider the following situation: you are the user of a library whose design is fixed, so you have no way to change definitions such as the enums inside it. Now you want to add your own container type and reuse the library's existing logic. If the library uses dynamic dispatch, the work is simple: just write another impl of the trait for your custom type and everything works. Unfortunately, if the library ships the static version, it becomes quite hard to achieve this goal, because you cannot add new variants to its closed enum...
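The same trade-off is visible from the Haskell side: a typeclass is open to new instances downstream, while a data type's set of constructors is closed. A minimal sketch (all names hypothetical):

class HasMethod a where
  method :: a -> String

-- the "static" design closes the set of cases inside the library:
data LibraryType = TypeA Int | TypeB Bool

-- but any downstream user can extend the typeclass version:
newtype UserType = UserType String

instance HasMethod UserType where
  method (UserType s) = "user: " ++ s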
Languages like Rust may have to compile a generic function many times, once for each type it’s used with. This could make the binary large, a phenomenon called code bloat in C++ circles. Let's consider a simpler case:
trait MyTrait {
    fn method(&self, ...) -> ReturnType;
}

fn f(x: MyTrait) { ... }   // invalid Rust: a bare trait object is unsized (see the correction below)
fn g<T: MyTrait>(x: T) { ... }
If you have many functions like g, instantiated at many types, the compiled binary may become larger and larger. However, this is usually not a big issue, since most of us have the luxury of ignoring code size given today's plentiful memory.
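As an aside, GHC makes the opposite default choice: a typeclass-polymorphic function is compiled once and receives a dictionary of methods at run time (much like a vtable), and you opt in to monomorphization per type with a SPECIALIZE pragma. A sketch (member is a hypothetical helper):

member :: Eq a => a -> [a] -> Bool
member x = any (== x)
{-# SPECIALIZE member :: Int -> [Int] -> Bool #-}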
In short, although static polymorphism has many advantages, there are still corners where dynamic dispatch works better. Personally, I really love the Haskell-like way of treating polymorphism (that's also why I like Rust). I don't claim this is the final, complete answer; discussion is welcome!
It suddenly occurred to me: why not combine the static and dynamic strategies? To allow users to further extend our model, we can leave a small hole for them to fill in later, like:
trait Custom {
    fn method(&self) -> ReturnType;
}

enum Object {
    A(Type0),
    B(Type1),
    ...
    Hole(Box<dyn Custom>),
}
However, with this approach some operations, such as clone, may be a little harder to implement (Clone cannot simply be derived once a Box<dyn Custom> variant is involved), but I think this is still an interesting idea.
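The same hole-for-extension idea can be written in Haskell with an existential constructor. A sketch (with Type0/Type1 replaced by concrete types for illustration):

{-# LANGUAGE ExistentialQuantification #-}

class Custom a where
  method :: a -> String

data Object
  = A Int
  | B Bool
  | forall a. Custom a => Hole a  -- the hole users fill in later

runMethod :: Object -> String
runMethod (A n)    = "A: " ++ show n
runMethod (B b)    = "B: " ++ show b
runMethod (Hole x) = method x     -- dispatches through the hidden instance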
More generally, Haskell's existential types have a similar function and implementation to dynamic polymorphism in OOP languages:
data Obj = forall a. (Show a) => Obj a
xs :: [Obj]
xs = [Obj 1, Obj "foo", Obj 'c']
doShow :: [Obj] -> String
doShow [] = ""
doShow ((Obj x):xs) = show x ++ doShow xs
I also found that existential types can be used to hide details of concrete types and provide a cleaner interface for users.
Thanks @MarkSaving. There is a mistake in Point 2's code: the dyn trait object is unsized, so the parameter should be changed to a reference or a boxed dyn:
fn f(x: Box<dyn MyTrait>) { ... }