I seem to re-edit the same paragraph, which usually begins with a definition or something of that sort: the closest relation, an equivalence, some kind of interaction described between zero, one, two or more objects. Then I update the same paragraph, updating each term, each word of the entire space, and like growing banyan trees, the meanings evolve into larger and larger polymorphic structures. I realize I must structure the way I write about functional words, since being functional implies that a description of a functional process also remains functional at its core.
So we will first identify the abstraction, like there is a
type identity a, the minimum degree of interaction: to identify, to simply name a thing. The abstractions often remain nameless until, ripe with breadth and depth of understanding, they appear on the surface, transforming into the shape we name a typeclass, a class of objects, of transformations, of morphisms, of functions, a
Functor, which is dually realized as a set of instances of the
Functor class of types, instances realized as subsets of the Functor set of all instances. But then this would be wrong, I presume, since a functor is not a set-theoretical term; it cannot be, since it implies a relation, describing only the betweenness of objects, of groups of objects. The name
Functor comes from abstract mathematics, from category theory, where a category is a collection of objects with structure, the structure being represented with arrows, the morphisms, drawn between the objects, which in turn appear as the endpoints of these interlinked arrows. How would you represent a morphism, or some action in timespace? Is it like a scalar, showing some function between two more concrete things, like the speed of a moving object? And functoriality itself is represented with
f, the builder, the creator, or, to use the proper term, the type constructor
f, which is declared immediately after the name of the typeclass. Like this:
class Functor f where
So this is a general definition of a Functor: like with an identity, we are not even aware of the transformation taking place, only that there is a functorial function, which we recognize as some
Functor f operating on some space. And how is the mapping itself declared?
fmap :: (a -> b) -> f a -> f b
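Written out in full, the class and its single method can be sketched like this; a minimal sketch that hides the Prelude's own Functor so the declaration compiles on its own, with the two functor laws (which the compiler does not enforce) stated as comments:

```haskell
-- Hide the Prelude's Functor so we can declare our own copy of it.
import Prelude hiding (Functor, fmap)

-- The general definition: f is an unknown type constructor, and
-- fmap lifts an ordinary function a -> b into f's space.
class Functor f where
  fmap :: (a -> b) -> f a -> f b

-- The two functor laws, stated but not enforced:
--   fmap id      == id
--   fmap (g . h) == fmap g . fmap h
```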
f is still an undefined morphism, like some
function = undefined or
action = unknown, so the first thing we might do is apply a morphism of our own, the one we learn at the very beginning: the identity morphism, the only arrow that goes from an object to itself. And we would call this an
instance Functor Identity where instance of the typeclass, specializing
fmap :: (a -> b) -> Identity a -> Identity b
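Made concrete, that instance can be sketched as below; note that it is the instance keyword, not class, that ties Identity to the Functor class (base ships essentially the same type as Data.Functor.Identity):

```haskell
-- A value of Identity a is just an a in a thin wrapper.
newtype Identity a = Identity a
  deriving (Show, Eq)

-- The specialized signature: fmap :: (a -> b) -> Identity a -> Identity b.
-- Unwrap, apply the function, wrap again.
instance Functor Identity where
  fmap f (Identity a) = Identity (f a)
```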
We realize that an identity of some sort, named
a, which is a variable placeholder in time and space, a mini abstraction that will hold another name, an instance of a
Functor class, is here being mapped over to another identity named
b. But isn't identity a function going from
a -> a like
id :: a -> a, which is how Haskell spells the identity morphism? There is an equivalence of some sort at play here, because we could also write
id :: b -> b or
id :: program -> program and it would still remain the same, meaning the unknown relation would still remain the same unknown relation; there is nothing to change in the process of the
a -> a transformation, otherwise how would we know we got the right thing back? Would getting another
a than the first constitute an unsolvable randomness, which, as the first step of some sequence of actions, would bring us to a state of halt? What is different here, and fascinating to realize, is that we are beginning not just to order our types as in a sequence but to apply more functions, passing more pure state into the definition. First we had almost a view from a distance,
a -> a, and then we got closer and raised our dimension level by one; we, so to say, typed our abstraction level with a unique function that maps those
a -> a like
f :: a -> a, and we name it a
Functor :: a -> b, but why not
Functor :: a -> a? Because how would we prove that
Functor is any different from
Identity? Again we are seeing what I think might be called alpha equivalence: a strange but obvious transformation where something seems to change and yet we remain the same, at least for now, while still on the lower levels of the abstraction plane.
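One way to see that sameness is through the first functor law: a lawful fmap applied to id must itself behave like id. A small check, using the Identity type that base exports from Data.Functor.Identity:

```haskell
import Data.Functor.Identity (Identity (..))

-- The functor identity law: fmap id == id.
-- Mapping the do-nothing arrow over a structure does nothing,
-- for Identity and for any other lawful Functor alike.
identityLawHolds :: Bool
identityLawHolds =
  fmap id (Identity 'x') == id (Identity 'x')
    && fmap id (Just (42 :: Int)) == id (Just 42)
```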
But are type constructors types themselves? No, because that would break the symmetry, and I wonder whether this itself would have been an instance of the famous "set of itself" paradox described by Russell. Why is that? Does a constructor have an identity? I also just noticed that function arrows associate to the right, while function and type application associate to the left.
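A sketch of that asymmetry: the arrow in a type associates to the right, while application associates to the left, which is exactly what makes partial application work:

```haskell
-- The type reads Int -> (Int -> Int): arrows associate to the right.
add :: Int -> Int -> Int
add x y = x + y

-- The call add 1 2 reads (add 1) 2: application associates to the
-- left, so supplying one argument yields a new function.
addOne :: Int -> Int
addOne = add 1
```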
Wikipedia says, "In type theory, a kind is the type of a type constructor or, less commonly, the type of a higher-order type operator. A kind system is essentially a simply typed lambda calculus 'one level up,' endowed with a primitive type, denoted * and called 'type', which is the kind of any (monomorphic) data type."
It is fascinating to realize that kinds are to types what types are to values. But how do we tell the difference between these two levels? To be precise, these two levels are themselves typed, otherwise we would not be sure, clearly separating the two abstractions. We could almost reverse-engineer this into its proper types, like a word puzzle: we only see the movements of types, and yet no variable names are there, because even identifying
a from some random remainder
b would push us out of the abstraction into the application realm.
Int :: *
Maybe :: * -> *
Maybe Bool :: *
a -> a :: *
Identity :: * -> *
(->) :: * -> * -> *
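In GHCi these can be queried directly with the :kind command, one level up from :type on terms; the output below is what a recent GHC prints with the default StarIsType behaviour:

```
ghci> :kind Int
Int :: *
ghci> :kind Maybe
Maybe :: * -> *
ghci> :kind Maybe Bool
Maybe Bool :: *
ghci> :kind (->)
(->) :: * -> * -> *
```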
Damn, I just love Haskell. Do you?
See Basic Type Level Programming in Haskell by parsonsmatt at https://www.parsonsmatt.org/2017/04/26/basic_type_level_programming_in_haskell.html for a clear description of the flow from functions to types, and from types to kinds of types. I myself could finally follow the article with intuition, too.