by krabob

What you need to know about OpenGL ES(2) for Mobile Graphics

This article quickly explains the differences between OpenGL for desktop GPUs and OpenGL ES(2) for mobile GPUs. You need some OpenGL or 3D API background to follow it.

Some rendering with OpenGL ES 2:  4 passes and 4 shaders are used here.

All mobile devices today have something in common: a mobile Graphics Processing Unit (GPU) that can be programmed with OpenGL ES and OpenGL ES2. Every iPhone and iPad from Apple includes a PowerVR, the Raspberry Pi embeds a “VideoCore IV”, and Android models use all of those plus a wide range of other GPUs such as NVidia Tegra and Qualcomm Adreno (found in Snapdragon chips). A lot of models, but only two APIs to program them all at the lowest level:
OpenGL ES (quite the same as OpenGL 1.x) and OpenGL ES2 (quite the same as… OpenGL 3) … and yes, “ES” means a lot of differences…

Everything in this document comes from the OpenGL documentation, specific GPU documentation, and my own experience testing on all kinds of devices.

No, the OpenGL ES driver will not be corrected by a system update in the future.

If you were used to hoping for driver updates on desktop to fix a bug or handle something differently, you have to know that GLES drivers are done once and for all for a given mobile GPU, quite often live in ROM and not in the system, and that the system (Android, …) basically does not manage that part. And the GPU makers do not care, because they simply provide no support after release.

Writing to FBOs, and using glViewport() and glClear(), differ *totally*.

Framebuffer Objects (FBOs) are the official way to create offscreen bitmaps and do offscreen rendering, and you can link them to a texture id.
The bad news is that mobile GL drivers only manage one FBO context at a time, and a whole FBO or screen is always internally “tile managed” by the driver, possibly with “implicit superscalar buffers”, which means for you:

  1. glBindFramebuffer() must be followed by glViewport() and glClear(); if not, it will crash on most mobile GPUs. You cannot “come back” to an FBO and continue drawing.
  2. glClear() will clear the whole FBO, not the rectangle given by glViewport(), unlike classic GL. You absolutely can’t do one glViewport() on half the screen, draw, then glViewport() the other half and draw again: impossible on all ES. Because of this, you should always give the whole rectangle to glViewport().
  3. (Because of 1 and 2) You must render your offscreen FBOs at the beginning of a frame, and only then begin to draw the screen. A nice idea is to keep a list of off-screens to render with delegate functions/methods, so your program will automatically “sort” the order of your drawing needs for a given frame, as sketched below.
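Rules 1 to 3 boil down to a frame loop like the following. This is only a minimal sketch of what I just described, not code from a real engine; names like renderOffscreenPass() and offscreenFbo[] are mine:

for (int i = 0; i < offscreenCount; i++) {
    // each offscreen pass: bind, size, clear, draw... and never come back to it
    glBindFramebuffer(GL_FRAMEBUFFER, offscreenFbo[i]);
    glViewport(0, 0, offscreenWidth[i], offscreenHeight[i]); // always the whole surface
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    renderOffscreenPass(i);
}
// only then, the screen itself (on iOS, bind your screen framebuffer instead of 0)
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glViewport(0, 0, screenWidth, screenHeight);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
renderScreenPass();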

 Texture Constants for glTexImage2D() are not the same in ES

Classic desktop GL has a very wide list of texture internal formats that describe both the encoding and the number of components: this is not reused at all in OpenGL ES(2), because components and encoding must be given separately. So glTexImage2D() has to be used differently. Read your GPU documentation about it.
To program in a way that is compatible with all OpenGL flavors, both mobile and desktop, I have something like this in my headers (note the GL include files also differ):

#ifdef USE_GLES_TEXTURES
// for OpenGL ES 1/2: sized internal formats don't exist, so pack two enums:
// the 16 high bits hold the type enum (GL_UNSIGNED_BYTE, GL_FLOAT, ...)
// and the 16 low bits hold the component enum.
// watch out, glTexImage2D() isn't used the same way either
typedef enum {
    epf_Byte_R=GL_LUMINANCE|(GL_UNSIGNED_BYTE<<16),
    epf_Byte_RG=GL_LUMINANCE_ALPHA|(GL_UNSIGNED_BYTE<<16),
    epf_Byte_RGB=GL_RGB|(GL_UNSIGNED_BYTE<<16),
    epf_Byte_RGBA=GL_RGBA|(GL_UNSIGNED_BYTE<<16), 
    epf_Float32_R=GL_LUMINANCE|(GL_FLOAT<<16),
    epf_Float32_RG=GL_LUMINANCE_ALPHA|(GL_FLOAT<<16),
    epf_Float32_RGB=GL_RGB|(GL_FLOAT<<16),
    epf_Float32_RGBA=GL_RGBA|(GL_FLOAT<<16),
    epf_Float16_R=GL_LUMINANCE|(GL_HALF_FLOAT_OES<<16),       
    epf_Float16_RG=GL_LUMINANCE_ALPHA|(GL_HALF_FLOAT_OES<<16),
    epf_Float16_RGB=GL_RGB|(GL_HALF_FLOAT_OES<<16),
    epf_Float16_RGBA=GL_RGBA|(GL_HALF_FLOAT_OES<<16),
} ePixelFormat;
#else
// Classic desktop OpenGL stuff:
typedef enum {
    epf_Byte_R=GL_LUMINANCE8,
    epf_Byte_RG=GL_LUMINANCE8_ALPHA8,
    epf_Byte_RGB=GL_RGB8,
    epf_Byte_RGBA=GL_RGBA8,
    epf_Float32_R=GL_LUMINANCE32F_ARB, 
    epf_Float32_RG=GL_LUMINANCE_ALPHA32F_ARB,
    epf_Float32_RGB=GL_RGB32F_ARB,
    epf_Float32_RGBA=GL_RGBA32F_ARB,
    epf_Float16_R=GL_LUMINANCE16F_ARB,    
    epf_Float16_RG=GL_LUMINANCE_ALPHA16F_ARB,
    epf_Float16_RGB=GL_RGB16F_ARB,
    epf_Float16_RGBA=GL_RGBA16F_ARB,
} ePixelFormat;
#endif

and in my inits (for FBO actually):

#ifdef USE_GLES_TEXTURES
    // OpenGL ES 1 or 2:
    unsigned int components =(unsigned int)( pixelFormat & 0x0000ffff);
    unsigned int format =(unsigned int) ((pixelFormat>>16)& 0x0000ffff);
    glTexImage2D(GL_TEXTURE_2D, ii, components ,
          pixelWidth, pixelHeight, 0, components, format, NULL);
#else
    // OpenGL 1/2, not ES:
    glTexImage2D(GL_TEXTURE_2D, ii, pixelFormat,
          pixelWidth, pixelHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
#endif

…Then by using the ePixelFormat enum in my code, I can stay compatible on desktop and mobile.

Note something special about texture size: textures and FBOs can be any pixel size on OpenGL ES and ES2, on all implementations: no need to test for extensions, NPOT (Non Power Of Two size) support is mandatory in ES (and in ES2 the extension list will not advertise NPOT simply because it doesn’t have to, but it’s there; note that core ES2 NPOT textures are limited to clamp-to-edge wrapping and no mipmaps).

Last word about texture formats: NVidia Tegra allows internal float16 RGBA textures (not float32 or anything else) and lets you write to them as float16 through an FBO and a pixel shader, so you can do some nice “General Purpose GPU” (GPGPU) tricks with it. As far as I know, a lot of other GPUs declare float texture format extensions, but only as a loading format: they are all restricted to 8-bit FBOs internally (last tested in 2014).
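For reference, here is a minimal sketch of how such a float16 render target can be created. This is my own illustration, not Tegra-specific code; it assumes the OES_texture_half_float extension is present, and the framebuffer-completeness check tells you whether the GPU really accepts it as a renderable format:

GLuint tex, fbo;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
// no filtering: many drivers can't filter float textures at all
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_HALF_FLOAT_OES, NULL);
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex, 0);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    // float16 is only a loading format on this GPU: fall back to 8-bit
}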

With OpenGL ES2 (like desktop OpenGL 3): no more glColor functions, no more matrix management, …

For both mobile ES and desktop OpenGL, a “2” or greater version number means a shader-based architecture that needs a vertex shader and a pixel shader to draw anything.
If you want to use the real power of your GPU, you have to use shaders.
And if you want shaders on mobile, you’ll have more code to write than with OpenGL 1.

Basically, in OpenGL ES2 and OpenGL 3, every GL function whose job can also be done in shaders using “attributes” and “uniforms” was removed.
So all the glMatrix functions are gone: you have to declare your own uniform matrices and apply them your own way in the vertex shader. It implies you have your own translate, rotate, multiply, etc. matrix implementations… external SDKs often provide theirs (NVIDIA Tegra SDK, any open source engine, …); I wrote my own. It’s always useful to be able to patch 4×4 float matrices in one way or another.
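As an illustration, a minimal ES2 vertex shader then looks like this; uMvpMatrix and aPosition are just names I chose, and the matrix itself is computed by your own CPU-side matrix code:

uniform mat4 uMvpMatrix;   // your own model * view * projection matrix
attribute vec4 aPosition;  // replaces the old fixed-function vertex input
void main()
{
    // this single line does what the glMatrix stack used to do for you
    gl_Position = uMvpMatrix * aPosition;
}

Any 4×4 matrix library will do to fill uMvpMatrix; the point is simply that GL no longer does it for you.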

OpenGL ES2 GLSL shaders are a bit different compared to classic GLSL.

First some simple rules:
a. Your mobile GPU is far more powerful than the ARM FPU: it’s always better to do computation in your vertex shader rather than in CPU float code. Think about it twice; you can have “quite big vertex shaders”, that’s no problem.

b. You can’t read textures in a vertex shader under OpenGL ES 2. Arggg, I hate that. It would actually have been the coolest thing ever; some (rare) mobile GPUs and drivers do support it, but you want to stay compatible.

Then the only really serious thing to know about GLSL for mobile:
you must start every fragment shader (and it doesn’t hurt in vertex shaders, which default to highp) with one of these lines:

 precision lowp float;
 precision mediump float;
 precision highp float;

This precision statement only exists in ES shaders and is mandatory.
It lets you choose whether the GPU works with low-precision fixed point (lowp), 16-bit floats (mediump) or 32-bit floats (highp).
It is meant to be powerful, because lowp is enough for copying 8-bit textures while highp allows nicer rendering.
This is the default precision of your shader, but you can then declare any lowp, mediump or highp precision per variable. Theoretically, this fine-tunes the compilation of the shader. But complex shading will need highp to really work; mediump sometimes causes glitches.
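As a small example of per-variable precision (the names are mine, it’s only a sketch), a texture-copy fragment shader can stay in mediump overall while keeping its texture coordinates in highp to avoid sampling glitches:

precision mediump float;
uniform sampler2D uTexture;
varying highp vec2 vTexCoord;  // the one place where the extra bits matter here
void main()
{
    gl_FragColor = texture2D(uTexture, vTexCoord);
}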

The Dreaded Android Context loss and other Video Memory management issues…

Even if modern mobile hardware has huge amounts of memory, Android and other systems are still largely based on “one focused app at a time”. Before Android 3, every OpenGL app had to completely free all textures, FBOs and VBOs each time the system paused it, and you had to re-init all of them when the app restarted. Even a sensor-driven screen rotation implied doing all of that, and for a game with many textures and FBOs it was hell, not to mention that you are meant to manage GL in a dedicated thread… and that the main thread will actually kill your context *before* the message reaches the other thread. The Android “Activity” class has special callbacks for this. Fortunately, Android 3 introduced setPreserveEGLContextOnPause(), which simply prevents that behavior… in most cases. Because yes: you still have to manage the “destroy and recreate the GL video memory, but keep the rest” situation.

And you know what? It’s great to do that. I have two levels of creation/destruction messages in my engines, one for video memory and the other for CPU memory. As I can compile for desktop and mobile GL, it also works around the nasty Windows and OSX bug in SDL that destroys the context when you change the size of the screen.

That’s all, have a nice code.

 

by krabob

Mastering multicore: parallelizing any algorithm… and how my resource library helps

As in any big project, I use several programming libraries for Math Touch Book.
I actually made a brand new graph theory API for it, to fit my needs. But I also use another one I wrote between 2006 and 2009, to manage what are called application resources. This is a very interesting management library called Veda. In the case of Math Touch Book, I just reused it to manage icons, screens, and messaging between objects. Thanks to Veda, if I need one more image for a button, I only add the image once and I have it in both the Android and iOS apps, and in any future implementation. If I want to get rid of unused or duplicate images, that is also automated for all app versions.

Managing external resources with the Veda lib in Math Touch Book. The interface is created automatically for any type of object (images, fonts, sounds, …).

Veda works like a C++ patch: you add some declarations to the classes you want to be managed. Then you get special serializable members (serializable meaning loadable/savable/clonable) with basic data types of any kind, and part of the magic was special intelligent pointers that let you view your whole data as a graph with object links and dependencies. It was first made to experiment with the “naked objects” design pattern, which is about getting interfaces to edit objects by just declaring the objects, with no additional code. I had also experimented with Java serialization, and my old XML-based Amiga engine “Karate” was lacking automatic interfaces.
For your information, it is open source and available here (and there are even Sega Dreamcast demos done with it!):

Veda source and binaries… mostly for windows xp

and Tons of Veda docs were “doxygened” here.

The interface creator for Veda in 2008, showcasing some 3D. This was a Microsoft MFC implementation at the time; the purple interfaces in the other screenshots are the current SDL implementation.

As all your objects could be managed as a graph, incremental step-by-step initialization became possible: when you ask for the construction phase of an object, it may need another linked object to be created first, and this second object could itself be bound to a third one that has to be initialized before it.

At the time Veda was made, it had an implementation for a 3D engine (drawn through an abstract engine, with 5 or 6 different implementations!). In that case, I had Veda script classes, which needed a 3D world, which needed 3D objects and cameras, which needed other 3D objects and 3D textures, which needed images of any kind.
But anything in computer science could be Veda-managed, not only 3D engines.

At left, the object list sorted by class. A blue rod means “created”. Editing a leaf object could lead to the de-creation of currently unused objects that depend on it. The graph was used to manage object updates during editing. On the second line of the edition deck, you can see an “intelligent pointer” to a texture. You could choose to preview one object with that GUI (here the torus) and edit another one with the editor deck, so you could fine-tune a texture while watching the result on the object.

All those objects were linked with intelligent pointers, and an algorithm was able to list all the objects in a leaf-to-root order, to ensure the leaf objects were created first and the dependent ones only after. A leaf object could be pointed to many times, and any object could point to another (the one exception: pointer cycles were forbidden), so it was really a graph, not a tree.
So incremental initialisations were automated, interfaces were automated, links and data were dynamic, whatever the size of the whole graph. My previous space video game “Earths of Alioth” also used Veda to manage resources, and its nice progress bar at startup is done thanks to Veda.

At the time, I kept discovering new advantages of graph-theory-managed resources in Veda, which explains its never-ending list of features.

One of the biggest ideas came with the advent of multi-core CPUs…
Since the mid-2000s we have had efficient multi-core CPUs, which means two or more programs can run at the same time, not only because of preemption (multi-tasking on a single CPU), but because multiple CPUs are really working at once. That was impractical before the 2000s because of data cache and hardware memory issues: two CPUs accessing the same memory would trash it.

Level design in “Earths Of Alioth” was edited with an automated Veda interface!

But then the problem was how to actually program for multiple cores. In most cases, programmers didn’t change their habits: you use multi-core through long-known threads, and usually a thread works on its own job, with its own data. For example, one thread mixes sounds on its own while the main thread does something else.
When a very large computation has to be performed on the same data, it is often hard to split an algorithm so the CPUs can work in parallel, because you have to be sure the data you read is coherent with what the other CPUs are doing at the same time.

Basically, an API like Veda can do that: for example, you can delegate the initialization of whole branches of objects to other CPU threads when you know they are on a separate branch, and hand those objects back to another thread once initialized.

To make it short and to generalize: knowing the graph of dependencies between objects makes it possible to automate the way multiple CPUs share the work on it. It only takes a few lines of graph-theory code, as in the sketch below.
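Here is a minimal sketch of that idea, not Veda’s actual API: walk the dependency graph in “waves” of objects whose dependencies are already created, and build each wave on several cores at once.

#include <vector>
#include <future>

struct Node {
    std::vector<Node*> dependencies;   // objects that must be created first
    bool created = false;
    void create() { /* allocate the resource: texture, buffer, ... */ created = true; }
    bool ready() const {               // all dependencies done, itself not yet
        for (const Node* d : dependencies)
            if (!d->created) return false;
        return !created;
    }
};

// Leaf objects first, dependent objects last; every wave runs in parallel.
void createGraphInParallel(std::vector<Node*>& graph) {
    for (;;) {
        std::vector<Node*> wave;
        for (Node* n : graph)
            if (n->ready()) wave.push_back(n);
        if (wave.empty()) break;                     // everything is created

        std::vector<std::future<void>> jobs;
        for (Node* n : wave)                         // independent branches
            jobs.emplace_back(std::async(std::launch::async, [n] { n->create(); }));
        for (auto& j : jobs) j.get();                // join before the next wave
    }
}

Real code would use a thread pool and handle failures, but the graph is what makes the split safe: two objects in the same wave can never depend on each other.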

Thanks for your attention and have a nice day !

 

 

by krabob

A Simple yet Powerful Artificial Intelligence Algorithm.

One of the key words about interfaces today is Responsive Design: for a mobile app, interfaces not only have to be as simple as possible, they have to be clear, and they have to propose only the expected functionalities that users will understand: they have to be adaptive.

So, not surprisingly, Math Touch Book displays nothing other than what you wrote on the screen. Touching a math expression then unfolds a menu that proposes a short list of features depending on that expression.


Adaptive menus are not new, but here the program must “think” to work out the list of functionalities, operations and modifiers: expressions can be expanded or not, simplified or not, resolved or not. And note that some of those features can take multiple passes, like equation resolution.
It might surprise you, but a bit of Artificial Intelligence (AI) intervenes at this level: the same basic AI algorithm is used both to select the menu entries and to resolve equations.

With the advent of multi-threaded mobile CPUs and GPUs over the last decade, robotics and artificial intelligence have seen permanent breakthroughs. On the software side, the AI field becomes bigger every day. But large industrial AI programs for robots, which have to process an impressive amount of data, and smaller algorithms managing equations (like Math Touch Book does) have things in common: they both manage their objects using graph theory, and they both create and test algorithms before applying them for real.


Let’s have a look at what Math Touch Book does when it resolves an equation.
First, equations are defined by graph trees, where the root element is an equals sign.
Elements in a tree are all attached to one another. That’s what graph theory is about, and we computer scientists usually think it’s a nice way to manage data.

A good graph theory framework should also have methods for cloning graphs, detaching branches of the tree and re-attaching them elsewhere, and the ability to use one element allocator or another, amongst other things.
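To make this concrete, here is a small sketch of what such a framework can look like. This is not the actual Math Touch Book code, just an illustration of a tree node that can be deep-cloned into a separate allocator, so an experiment can be cancelled by freeing that allocator:

#include <memory>
#include <string>
#include <vector>

struct Allocator;                      // owns nodes; freeing it drops them all

struct ExprNode {
    std::string symbol;                // "=", "+", "x", "2", ...
    std::vector<ExprNode*> children;   // the root of an equation is "="
    ExprNode* clone(Allocator& a) const;   // deep copy into another allocator
};

struct Allocator {
    std::vector<std::unique_ptr<ExprNode>> pool;
    ExprNode* make(const std::string& symbol) {
        pool.push_back(std::make_unique<ExprNode>());
        pool.back()->symbol = symbol;
        return pool.back().get();
    }
    void free() { pool.clear(); }      // cancels every experiment at once
};

ExprNode* ExprNode::clone(Allocator& a) const {
    ExprNode* copy = a.make(symbol);
    for (const ExprNode* c : children)
        copy->children.push_back(c->clone(a));
    return copy;
}

Freeing the temporary allocator is what makes it cheap to “imagine” a modification and throw it away, which is exactly what the pseudo-code below relies on.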


Then we have a set of “bricks”: classes available in the program; it’s an object-oriented concept. Each of these classes is a “GraphAction”, and each is able to modify a graph in its own way. They are available to be used by the artificial intelligence.

So now that we know the shape of the data and the tools to modify it, we want the program to find, by itself, a way to resolve the equation. How?

An equation resolution algorithm can be expressed as a list of GraphActions. But which GraphActions to use, and in what order?
And what happens if the program finds a wrong algorithm? The graph would be destroyed!
No, because we can clone the graph and test modifications on the clone.
We can also simply look at the shape of a graph and decide to modify it one way or another according to that shape. (You now understand the usefulness of having clone functions and special allocators in your graph framework.)
Written in pseudo-code, a resolution algorithm looks like this:

class changeThisShapeAction extends GraphAction
class changeTHATShapeAction extends GraphAction
...

get_Action_List_For_Resolution( listOfActions, equationGraph )
{
    tempGraphAllocator // init an allocator to experiment with

    // create a temporary clone to experiment on
    cloneOfEquation = equationGraph.clone(tempGraphAllocator)

    // this loop searches for a shape that can be resolved
    while( cloneOfEquation has no resolvable shape )
    {
        if(cloneOfEquation has "this" shape)
        {
            didSomething = changeThisShapeAction.
                modifyGraph(tempGraphAllocator,cloneOfEquation);
            if(didSomething) listOfActions.add(changeThisShapeAction)
        }
        if(cloneOfEquation has THAT shape)
        {
            didSomething = changeTHATShapeAction.
                modifyGraph(tempGraphAllocator,cloneOfEquation);
            if(didSomething) listOfActions.add(changeTHATShapeAction)
        }
        (... lots of other modifier actions to test accordingly)
    } // end of loop

    // here the equation may be resolvable or not:
    if(cloneOfEquation is first degree) listOfActions.add(finaliseFirstDegree)
    if(cloneOfEquation is quadratic)    listOfActions.add(finaliseQuadratic)
    if(cloneOfEquation not resolvable)  listOfActions.add(finaliseNoSolution)

    // the magic here is that we simply cancel everything done on the temp graph...
    tempGraphAllocator.free()

    // ...and we return the exact list of modifiers that finds the solution.
    return listOfActions
}

Innocently, with just one accumulator list, a loop and an object-oriented set of classes, we have here an algorithm that… creates an algorithm. And it works!
The human brain actually plays more or less the same “imagine the most interesting thing to do, then do it” game. The temporarily allocated memory can be seen as a short-lived “imagination”. Obviously the real code is a little bigger, but the main idea is here.


To me, the greatest thing of all is that, at this level, the algorithm is found and available for use, but you may or may not use it inside the application; it’s up to the user, who may use another modifier. In some way this “search what to do” artificial intelligence algorithm can also be seen as a recursive extension of the natural intelligence of the user, who decides, in the same way, whether or not to use a feature. So it may demonstrate that artificial intelligence and natural intelligence can cooperate.
In this perspective, one can already speak of Human Bananas.


 

 

by krabob

Post Media and the end of Static Culture in Digital Short Films.

Years ago, I used to terrify my few friends in bars with completely improvised and bizarre artistic manifestos. Here is a brand new one:

Come as you are.

M4nkind.com is originally a group of science students, artists and enthusiasts that has been involved in real-time experimental short films on computers since the beginning of the nineties, at a time when a lot of things were happening in the underground computer scene throughout Europe.

Even before 1990, hackers started to explore experimental real-time effects on computers, with graphics and music. This movement spread rapidly and involved a lot of research, both technical and artistic.

Later on, there was a paradox you could read between the lines of those productions in the nineties, concerning nothing less than the aim of these numerous short films:
Was it technical or artistic? Was it technical demonstration through art, or art through technical demonstration? Or both? Should short experimental computer films be made for everyone, or for a small circle of amateurs?

There were occasional discussions about it, which did not prevent the movement from evolving and following technical advances. On the art side, it is noticeable that a lot of different shapes, references to one culture or another, sequence montage techniques, and experimentations with sound and image symbiosis exploded during the nineties in these movies, like nowhere else.

 

Then came the word.

Aside from that, industrial computer science and marketing were also going their own way during that period. The advent of the CD-ROM, around 1994, was sold with a new word that would single-handedly define the links to come between art and computer science: Multimedia.
Basically, your computer would be “Multimedia” because it was no longer that sparkling grey brick lost in a corner of your room. You could have images and sounds now. It was officially allowed to do art on computers, thanks to that word.
The problem was, a lot of people had actually been doing exactly that for ten years, and yet that art was not recognized as valuable, with rare exceptions. And the word didn’t help.

“Hello, I want to buy a Multimedia Computer.”

You could read the word “Multimedia” for years on ads and brochures, until the mid-2000s: the fashion more or less lasted a decade. By then, kids were already raised with the internet, adults more frequently had computer knowledge and computer culture, not to mention all media becoming digital.
This, with the advent of video servers as the final nail in the coffin, completely changed the way people used to see computer-for-computer art, by flattening every symbol and every culture to the same level. Whatever you were doing, music with spoons, knitting bags, reviving forgotten musical genres, or programming effects on a 30-year-old 8-bit computer, all of it suddenly became OK and could gain recognition.

The story of pixel art is very relevant here:
Pixel artists used to make a living from their work in the eighties, drawing characters for video games. It was a difficult art involving a large set of techniques; they had to be graphic designers and animators, and they had to deal with technical constraints on size, colors and memory footprint. The few people actually able to do that were praised by their employers.
It all changed in the middle of the nineties. People wanted 3D games and pixel art became obsolete quite suddenly. For a long time. Until the Post-Media era.

Paradoxically, labelling a culture leads to restricting that culture. A dead culture is not the one you haven’t heard about, but more likely the one no one practices, the one that does not borrow elements from other cultures.

“Hello, I want to buy a Post Media Computer”

Recently the art world, gallery owners and critics, for the first time in history, arrived at the same analysis about culture and the state of the art as an independent underground demomaker could, and even more, using the same words: Post Media. A sign of the times: pixel artists like eBoy are nowadays praised in galleries. Displaying a background as an underground computer geek and experimenter does not scare anyone anymore, quite the reverse, it opens doors.
So what art can you do in an environment where everything is possible, where everything is open? Isn’t it time to deeply reshape the frames of art, to mix shapes, colors and sounds in ways they were never mixed before? Isn’t it time to really flatten the field of possibilities before your eyes?
I think so, and I invite you to do so.

  •  The old stuff from m4nkind is visible on this very site: http://www.m4nkind.com
  •   pouet.net references more than 62,000 real-time digital computer short films made around the world between 1980 and today.
  •  http://demoscene.tv has been broadcasting demoscene videos continuously since 2005.

 

 

 

by krabob

A First Person Shooter in Spherical Geometry

FPS (First Person Shooters) have become very popular and have never ceased to improve over the last twenty years. They let players explore three-dimensional worlds in a very simple and intuitive way. In this constantly renewed genre, built on the same rules and codes, original variations sometimes appear:


In “Earths Of Alioth”, released in 2015, space has become spherical: the notion of up and down is completely gone, and for good reason: we are on satellites orbiting distant exoplanets in a fairly realistic space environment.

However, most FPS rules are present. Only the movement mode changes: you must jump from satellite to satellite, pursuing drones that are also in orbit. The question of managing space, essential for the player, is thereby upended: how do you move toward or away from another point of space? How do you find the shortest path from one point to another by jumping on satellites that are themselves moving? These issues are more typical of a space agency, but they find immediate, intuitive answers in the game.

In our view, these intuitions are natural because there are links between two-dimensional, three-dimensional and spherical geometry, the latter sharing mathematical properties with the previous two.
Video games have gone through a two-dimensional era, then a three-dimensional one; it makes sense that more “spherical games” are making their appearance. Today’s touch screens and tablets fit that perfectly.
“Earths Of Alioth” is available on the iTunes App Store and Google Play.


by krabob

Why did I start MathTouchBook?

I started Math Touch Book after I made my first mobile video game, one year ago.
I was researching new kinds of interfaces on tablets, and I also sometimes needed to do some maths in a scratch notebook.
I slowly realized that the equations and functions you usually write there could gain a lot if they were managed on a touch screen:

When you do math on paper, you often write the first shape of an expression, then a second modified shape, then a third… you constantly invert, simplify, expand and factorise what you write. Consequently, you can hardly guess how big the result will be on the page, and mathematicians spend their lives struggling with page layouts.
Fortunately, equations can also be seen in “four dimensions”, transforming through time.
Clearly, a mobile application could manage that aspect.

I thought I would be able to program an engine using the graphics processing unit (GPU) that would draw equations in a vector space and morph between those shapes.

While making my previous video game, I had actually built a glyph engine flexible enough to do that. Then a few tricks from the graph theory paradigm would be enough to turn it into a useful, living mathematical tool.
A second important aspect of such a tool would be automatic alignment. One of the keys to learning mathematics is writing clear, correctly aligned expressions. Automating this could help newcomers. Moreover, most math tools force users to type one-line expressions with a lot of parentheses that are hardly readable. A vector space could avoid that.

Another idea came while I was reading a book about the life of Richard Feynman. He told how he could see equations in colors when he was a young boy, and how it helped him learn. Synesthesia: something great and easy to implement.

At this point, I knew I could make such a tool within a year of work, with a fairly precise workflow. So the development began.