Our dear Krabob has just released Math Touch Book on iOS and Android.
Get it now!
Links: → Visit Store → Search Google
And now… MATHS !!!
As in any big project, I use several software libraries in Math Touch Book.
I actually wrote a brand new graph-theory API for it, to fit my needs. But I also reuse another library I wrote between 2006 and 2009 to manage what are called application resources: a very interesting management library called Veda. In Math Touch Book, I simply reuse it to manage icons, screens, and messaging between objects. Thanks to Veda, if I need one more image for a button, I add the image once and it is available in both the Android and iOS apps, and in any future implementation. Removing unused or duplicate images is also automated across all versions of the app.
Veda works like a C++ add-on: you add a few declarations to any class you want to be managed. You then get special serializable members (serializable meaning loadable, savable and clonable) for basic data types of any kind, and, one of its magic tricks, special intelligent pointers that let you view your whole data set as a graph of object links and dependencies. It was first made to experiment with the "naked objects" design pattern, which is about generating editing interfaces from object declarations alone, with no additional code. I had also experimented with Java serialization, and my old XML-based Amiga engine "Karate" lacked automatic interfaces.
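To give a rough idea of the pattern, here is a minimal illustrative C++ sketch of a managed class with serializable members and link pointers; the names are hypothetical and this is not Veda's actual API, just the general technique it relies on:

    #include <memory>
    #include <string>
    #include <vector>

    // Hypothetical base class standing in for what a resource-management
    // framework like Veda provides: load/save/clone plumbing lives here.
    struct ManagedObject {
        virtual ~ManagedObject() = default;
        virtual const char* className() const = 0;
    };

    // "Intelligent pointer": a link the framework can follow when it walks
    // the whole data set as a graph (dependencies, serialization, dedup...).
    template <typename T>
    using ObjectLink = std::shared_ptr<T>;

    struct Image : ManagedObject {
        std::string fileName;                  // serializable basic member
        const char* className() const override { return "Image"; }
    };

    struct ButtonScreen : ManagedObject {
        std::string title;                     // serializable basic member
        std::vector<ObjectLink<Image>> icons;  // links: this screen depends on images
        const char* className() const override { return "ButtonScreen"; }
    };

With declarations like these, the framework can enumerate every member and every link, which is what makes automatic saving, cloning and interface generation possible.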
For your information, it is open source and available here (there are even Sega Dreamcast demos done with it!):
Veda source and binaries… mostly for Windows XP
and tons of Veda docs were "doxygened" here.
Because the whole set of objects can be managed as a graph, incremental, step-by-step initialization becomes possible: when you ask for the construction of an object, it may need another linked object to be created first, and that second object may in turn be bound to a third, previously initialized one.
At the time Veda was made, it was used to implement a 3D engine (drawn through an abstract rendering layer with 5 or 6 different implementations!). In that case, I had Veda script classes, which needed a 3D world, which needed 3D objects and cameras, which needed other 3D objects and 3D textures, which needed images of any kind.
But anything in computer science could be Veda-managed, not only 3D engines.
All those objects were linked with intelligent pointers, and an algorithm was able to list all the objects in leaf-to-root order, to ensure the leaf objects were created first and the dependent ones only afterwards. A leaf object could be pointed to many times, and any object could point to another (with one exception: pointers could not form a ring), so it was really a graph, not a tree.
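Here is a minimal sketch of that leaf-to-root listing, in my own illustrative C++ rather than Veda's code: a depth-first post-order walk over the dependency links emits every leaf before the objects that point to it, and a "visiting" mark detects the forbidden pointer rings:

    #include <stdexcept>
    #include <unordered_map>
    #include <vector>

    struct Node {
        std::vector<Node*> dependencies;  // objects that must be created first
    };

    enum class Mark { Unvisited, Visiting, Done };

    // Post-order DFS: appends dependencies (leaves) before dependents (roots).
    void visit(Node* n, std::unordered_map<Node*, Mark>& marks,
               std::vector<Node*>& order) {
        Mark& m = marks[n];
        if (m == Mark::Done) return;              // shared leaf, already listed
        if (m == Mark::Visiting)                  // a "ring" of pointers
            throw std::runtime_error("cycle detected in object graph");
        m = Mark::Visiting;
        for (Node* dep : n->dependencies) visit(dep, marks, order);
        m = Mark::Done;
        order.push_back(n);
    }

    std::vector<Node*> leafToRootOrder(const std::vector<Node*>& roots) {
        std::unordered_map<Node*, Mark> marks;
        std::vector<Node*> order;
        for (Node* r : roots) visit(r, marks, order);
        return order;                             // initialize objects in this order
    }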
So incremental initializations were automated, interfaces were automated, and links and data were dynamic, whatever the size of the whole graph. My previous space video game "Earths of Alioth" also used Veda to manage its resources, and its nice progress bar at startup is done thanks to Veda.
At the time, I kept discovering new advantages of graph-theory-managed resources in Veda, which explains its never-ending list of features.
One of the biggest ideas came with the advent of multi-core CPUs...
Since the mid-2000s we have had efficient multi-core CPUs, which means two or more programs can run at the same time, not only through preemption (multitasking on a single CPU) but on genuinely separate cores working in parallel. That was rare in consumer hardware before the 2000s, partly because of data-cache and memory-hardware issues: two CPUs accessing the same memory without coordination could trash each other's data.
But then the problem became how to program for multiple cores. In most cases programmers did not change their habits: you use multiple cores through long-known threads, and usually a thread works on its own job with its own data. For example, one thread mixes sounds on its own while the main thread does something else.
When a very large computation has to be performed on the same data, it is often hard to split the algorithm so that the CPUs work in parallel, because you have to be sure the data you read is coherent with what the other CPUs are doing at the same time.
An API like Veda can help with exactly that: for example, you can delegate the initialization of whole branches of objects to other threads when you know they live on separate branches, and hand those objects back to another thread once they are initialized.
To make it short and general: knowing the graph of dependencies between objects makes it possible to automate the way multiple CPUs share the work on it; a few lines of graph-theory algorithms are enough to do that.
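As a hedged illustration of that last point (my own sketch, assuming the dependency analysis has already identified branches that share no objects), handing independent branches to worker threads takes only a few lines of standard C++:

    #include <future>
    #include <vector>

    struct Branch {
        // Assumed to build every object of one independent sub-graph,
        // leaves first, exactly as in the single-threaded case.
        void initialize() { /* create this branch's objects here */ }
    };

    // Initialize branches that share no objects in parallel, then let the
    // caller build the objects that depend on all of them.
    void initializeIndependentBranches(std::vector<Branch*>& branches) {
        std::vector<std::future<void>> jobs;
        jobs.reserve(branches.size());
        for (Branch* b : branches)
            jobs.push_back(std::async(std::launch::async, [b] { b->initialize(); }));
        for (auto& j : jobs)
            j.get();  // join: every branch is ready, dependents can now be built
    }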
Thanks for your attention and have a nice day!
One of the key words about interfaces today is Responsive Design. For a mobile app, interfaces not only have to be as simple as possible, they have to be clear, and they have to offer only the functionalities users expect and will understand: they have to be adaptive.
So, not surprisingly, Math Touch Book displays nothing but what you wrote on the screen. Touching a math expression then unfolds a menu proposing a short list of features suited to that expression.
Adaptive menus are not new, but here the program must "think" to work out the list of functionalities, operations and modifiers: expressions can be expanded or not, simplified or not, solved or not. And note that some of those features can take multiple passes, like equation solving.
It might surprise you, but a bit of artificial intelligence (AI) intervenes at this level: the same basic AI algorithm is used both to select the menu entries and to solve equations.
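As a rough sketch of what such a shared mechanism can look like (illustrative C++ with hypothetical names, not the app's real classes), a single "which modifiers apply to this expression?" query can feed both the contextual menu and the solver:

    #include <vector>

    struct Expression;  // the touched math expression (a tree of elements)

    // One available operation: expand, simplify, solve...
    struct Modifier {
        virtual ~Modifier() = default;
        virtual bool appliesTo(const Expression& e) const = 0;  // shape test
    };

    // The same query feeds two consumers: the contextual menu lists the
    // applicable modifiers, and the solver picks among them step by step.
    std::vector<const Modifier*> applicableModifiers(
            const Expression& e, const std::vector<const Modifier*>& all) {
        std::vector<const Modifier*> chosen;
        for (const Modifier* m : all)
            if (m->appliesTo(e)) chosen.push_back(m);
        return chosen;
    }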
With the advent of multi-core mobile CPUs and GPUs over the last decade, robotics and artificial intelligence have seen constant breakthroughs. On the software side, the AI field grows every day. But large industrial AI programs for robots, which have to process an impressive amount of data, and smaller algorithms manipulating equations (as Math Touch Book does) have things in common: they both manage their objects using graph theory, and they both build and test algorithms before applying them for real.
Let's have a look at what Math Touch Book does when it solves an equation:
First, equations are defined as trees, where the root element is the equals sign.
Every element in the tree is attached to another. That is what graph theory is about, and we computer scientists usually think it is a nice way to manage data.
A good graph-theory framework should also have methods for cloning graphs, detaching branches of a tree and re-attaching them elsewhere, and the ability to use one element allocator or another, among other things.
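To make those requirements concrete, here is an illustrative C++ sketch (my own names, not the app's actual framework) of such an expression node, with the equals sign at the root and the clone-into-an-allocator ability mentioned above:

    #include <deque>
    #include <string>
    #include <vector>

    struct NodeAllocator;  // forward declaration

    // One element of the expression tree: "=", "+", "*", "x", "2"...
    struct ExprNode {
        std::string symbol;
        std::vector<ExprNode*> children;             // attached sub-expressions
        ExprNode* clone(NodeAllocator& alloc) const; // deep copy into an allocator
    };

    // A trivial arena: every node built during an experiment can be thrown
    // away at once by destroying (or clearing) the allocator.
    struct NodeAllocator {
        std::deque<ExprNode> pool;
        ExprNode* make(const std::string& symbol) {
            pool.push_back(ExprNode{symbol, {}});
            return &pool.back();
        }
    };

    ExprNode* ExprNode::clone(NodeAllocator& alloc) const {
        ExprNode* copy = alloc.make(symbol);
        for (const ExprNode* child : children)
            copy->children.push_back(child->clone(alloc));
        return copy;
    }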
Then we have some bricks, which schematically represent classes available in the program; this is an "object-oriented" concept. The scheme tells us that each of these classes is a "GraphAction", each able to modify a graph in its own way. They are available for the artificial intelligence to use.
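Continuing the same sketch (reusing the ExprNode and NodeAllocator types above, still purely illustrative), a GraphAction can be modeled as an abstract base class with one method that rewrites the tree, and each concrete subclass knows a single transformation:

    // Abstract modifier: each subclass knows how to rewrite the graph one way.
    struct GraphAction {
        virtual ~GraphAction() = default;
        // Returns true if the graph was actually changed.
        virtual bool modifyGraph(NodeAllocator& alloc, ExprNode* root) const = 0;
    };

    // Hypothetical example: rewrite "x - a = b" into "x = b + a" by moving
    // the subtracted term to the other side of "=".
    struct MoveTermAcrossEquals : GraphAction {
        bool modifyGraph(NodeAllocator& alloc, ExprNode* root) const override {
            if (root->symbol != "=" || root->children.size() != 2) return false;
            ExprNode* left = root->children[0];
            if (left->symbol != "-" || left->children.size() != 2) return false;
            ExprNode* newRight = alloc.make("+");             // build "b + a"
            newRight->children = { root->children[1], left->children[1] };
            root->children = { left->children[0], newRight }; // re-attach: "x = b + a"
            return true;
        }
    };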
So now that we know the shape of the data and the tools to modify it, we want the program to find, by itself, a way to solve the equation. How?
An equation-solving algorithm can be expressed as a list of GraphActions. But which GraphActions should be used, and in what order?
And what happens if the program finds a wrong algorithm? The graph would be destroyed!
No, because we can clone the graph and test modifications on the clone.
We can also simply look at the shape of a graph and decide to modify it one way or another according to that shape. (You now understand the usefulness of having clone functions and special allocators in your graph framework.)
Written in pseudo-code, the resolution-finding algorithm looks like this:
class changeThisShapeAction extends GraphAction
class changeTHATShapeAction extends GraphAction
...

get_Action_List_For_Resolution( listOfActions, equationGraph )
{
    tempGraphAllocator                // init an allocator to experiment with
    // create a temporary clone to experiment on
    cloneOfEquation = equationGraph.clone(tempGraphAllocator)

    // this loop searches for a shape that can be resolved
    // (it must also stop when no action changed anything)
    while( cloneOfEquation has no resolvable shape )
    {
        if( cloneOfEquation has "this" shape )
        {
            didSomething = changeThisShapeAction.modifyGraph(tempGraphAllocator, cloneOfEquation);
            if( didSomething ) listOfActions.add(changeThisShapeAction)
        }
        if( cloneOfEquation has THAT shape )
        {
            didSomething = changeTHATShapeAction.modifyGraph(tempGraphAllocator, cloneOfEquation);
            if( didSomething ) listOfActions.add(changeTHATShapeAction)
        }
        (... lots of other modifier actions are tested here accordingly)
    } // end of loop

    // here the equation may be resolvable or not:
    if( cloneOfEquation is first degree )    listOfActions.add(finaliseFirstDegree)
    if( cloneOfEquation is quadratic )       listOfActions.add(finaliseQuadratic)
    if( cloneOfEquation is not resolvable )  listOfActions.add(finaliseNoSolution)

    // the magic here is that we just cancel everything done on the temp graph...
    tempGraphAllocator.free()

    // We return the exact list of modifiers that finds the solution.
    return listOfActions
}
Acting innocently, with just one accumulator list, a loop, and an object-oriented set of classes, we have here an algorithm that creates an algorithm. And it works!
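To close the loop, still within the same hypothetical sketch: once the search returns its list, the recorded actions are simply replayed, in order, on the real equation graph instead of the throw-away clone:

    #include <vector>

    // Replay the plan found on the temporary clone, this time on the real graph.
    void applyPlan(const std::vector<const GraphAction*>& plan,
                   NodeAllocator& mainAllocator, ExprNode* realEquation) {
        for (const GraphAction* step : plan)
            step->modifyGraph(mainAllocator, realEquation);  // each step rewrites the tree
    }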
The human brain actually plays more or less the same "imagine the most interesting thing to do, then do it" game. The temporarily allocated memory can be seen as a short-lived "imagination". Obviously the real code is a little bigger, but the main idea is here.
To me, the greatest thing of all is that once the algorithm is found it is available for use, but using it inside the application is up to the user, who may pick another modifier instead. In a way, this "search what to do" artificial-intelligence algorithm can also be seen as a recursive extension of the natural intelligence of the user, who decides, in the same way, whether or not to use a feature. So it may demonstrate that artificial intelligence and natural intelligence can cooperate.
From this perspective, one can already speak of Human Bananas.
I started Math Touch Book after I did my first video game on mobile, one year ago.
I was researching new kinds of interfaces on tablets, and I also sometimes needed to do some maths in a rough notebook.
I slowly realized that the equations and functions you usually write there could gain a lot if they were managed on a touch screen:
When you do math on paper, you often write the first shape of an expression, then a second, modified shape, then a third… You constantly invert, simplify, expand and factorise what you write. Consequently, you can hardly guess how big the result will be on the page, and mathematicians spend their lives struggling with page layout.
Fortunately, equations can also be seen in "four dimensions", transforming through time.
Clearly, a mobile application could manage that aspect.
I thought I would be able to program an engine, using the graphics processing unit (GPU), that would draw equations in a vector space and morph between those shapes.
While making my previous video game, I had actually built a glyph engine flexible enough to do that. A few tricks from the graph-theory paradigm would then be enough to turn it into a useful, living mathematical tool.
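The morphing itself needs nothing exotic. As a minimal sketch of the idea (my own illustration, assuming each rendered expression is just a list of positioned glyphs), pairing up the glyphs of the old and new layouts and interpolating their positions over time is enough for a first version:

    #include <cstddef>
    #include <vector>

    struct Glyph {
        char32_t codepoint;  // which character to draw
        float x, y, scale;   // where and how big, in the page's vector space
    };

    // Linear interpolation between two layouts of matched glyphs, t in [0, 1].
    // (Real code must also handle glyphs that appear or disappear, for example
    // by fading them in or out.)
    std::vector<Glyph> morphLayout(const std::vector<Glyph>& from,
                                   const std::vector<Glyph>& to, float t) {
        std::vector<Glyph> result;
        const std::size_t n = from.size() < to.size() ? from.size() : to.size();
        result.reserve(n);
        for (std::size_t i = 0; i < n; ++i) {
            Glyph g = to[i];
            g.x     = from[i].x     + (to[i].x     - from[i].x)     * t;
            g.y     = from[i].y     + (to[i].y     - from[i].y)     * t;
            g.scale = from[i].scale + (to[i].scale - from[i].scale) * t;
            result.push_back(g);
        }
        return result;
    }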
A second important aspect of such a tool would be automatic alignment. One of the keys to learning mathematics is writing clear, correctly aligned expressions, and automating this could help newcomers. Besides, most math tools force users to type one-line expressions full of parentheses that are hard to read. A 2D vector layout could avoid that.
Another idea came while I was reading a book about the life of Richard Feynman. He told how he could see equations in colors when he was a young boy, and how that helped him learn. Synesthesia: something great and easy to implement.
At this point, I knew I could make the tool within a year of work, with a fairly precise workflow. So development began.