Physics Simulations

Posted by Kaya Kupferschmidt • Thursday, May 24. 2007 • Category: Programming
After working with ODE to get an assembly simulation running, I began to dislike it for several reasons. The most important one is that ODE does not seem to handle large scales very well (even the manual notes that it is best to scale everything between 0.1 and 10.0, which was not a real option in my case).

So I looked at different packages, but even the two big ones, Havok and Ageia's PhysX, do not seem to handle arbitrary triangle meshes as collision geometry very well (at least that is my understanding after reading their documentation).

Fortunately, I finally found Vortex, a high-end physics package geared towards simulation, and I have to say that so far I am really impressed with both its speed and its accuracy! Vortex handles collisions between arbitrary meshes very well and very fast. The collision response is quite good: if I tune the parameters (mass, forces, joints), I get almost no penetration between complex triangle objects, which is quite a difficult task.

In addition, Vortex offers a lot more parameters to tune than other packages in order to get realistic and stable simulations. So if you are looking for the most realistic results with complex shapes, Vortex seems to me to be the only way to go.

Sliding in VR with ODE, part 2

Posted by Kaya Kupferschmidt • Monday, April 2. 2007 • Category: Programming
After I started the integration of ODE into the immersive VR project, I stumbled over a lot of difficulties, some of which are still not properly solved:

  • Collision handling between arbitrary triangle meshes is complex. ODE needs as much information as possible for a correct collision response. This includes the contact point and normal (both rather easy to get) and the penetration depth. The latter is not trivially extracted from a collision; it is not even simple to define correctly for non-convex geometry. I worked around this problem by approximating the penetration depth on a triangle-by-triangle basis, but my approximation can still return much too large values, so I finally had to clamp the result (see the first sketch after this list).

  • The connection between the virtual object and the tracked hand of the user is now realized by a fixed joint between two ODE bodies (one body for the hand ("user body") and one body for the object ("object body")). Our naive approach simply repositions the user-controlled body in each frame and hopes that ODE will move the connected object body towards the user body while trying to obey collisions. This worked to some extent, but as soon as there were collisions, the distance between both bodies increased and remained even after the object had been moved out of the collision again. Debugging showed that the user body had accumulated insane velocities in order to resolve the fixed-joint constraint, and these velocities were never reset after the collision. So I had to add a linear and an angular motor with a target speed of zero to slow the bodies down again (see the second sketch after this list). This is still not an ideal solution; it would be much better to use motors to control the virtual object in the first place.

  • ODE seems to be very sensitive to scaling. The ODE manual states that the best results are achieved if all values are in the range 0.1 to 10.0, but we have models in millimetres (cars) and models on a metre scale (airplanes). ODE seems to be especially sensitive to different masses: it turned out that a proportional increase of all masses caused all collisions to be ignored.
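
For illustration, here is a minimal sketch of the kind of per-triangle penetration approximation I mean. The helper names and the clamping threshold are made up for this example; the real code differs in the details.

```cpp
// Hypothetical sketch of a per-triangle penetration estimate: measure how far
// the penetrating body's vertices lie behind the plane of a colliding triangle
// and clamp the result before handing it to ODE's contact joints.
#include <algorithm>
#include <cstddef>
#include <vector>

struct Vec3 { double x, y, z; };

static double dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// 'normal' is the unit normal of the colliding triangle, 'pointOnTri' any of
// its vertices, 'verts' the vertices of the penetrating body near the contact.
double approximatePenetration(const Vec3& normal, const Vec3& pointOnTri,
                              const std::vector<Vec3>& verts, double maxDepth)
{
    double depth = 0.0;
    for (std::size_t i = 0; i < verts.size(); ++i) {
        Vec3 d = { verts[i].x - pointOnTri.x,
                   verts[i].y - pointOnTri.y,
                   verts[i].z - pointOnTri.z };
        double signedDist = dot(normal, d);      // negative = behind the plane
        depth = std::max(depth, -signedDist);
    }
    // For non-convex geometry this estimate can become far too large,
    // so clamp it to keep the collision response stable.
    return std::min(depth, maxDepth);
}
```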


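To make the joint setup concrete, here is a rough sketch in terms of ODE's C API. The force and torque limits are placeholder values that have to be tuned per scene, and in this sketch the "brakes" act only on the object body; in practice one may want them on both bodies.

```cpp
#include <ode/ode.h>

dWorldID world;        // created elsewhere with dWorldCreate()
dBodyID  userBody;     // tracked hand, repositioned every frame
dBodyID  objectBody;   // the manipulated virtual object

void setupCoupling()
{
    // Rigidly couple hand and object; ODE then tries to keep the object at
    // the hand while still respecting contact constraints.
    dJointID fix = dJointCreateFixed(world, 0);
    dJointAttach(fix, userBody, objectBody);
    dJointSetFixed(fix);

    // Linear "brake": a motor with target velocity zero damps the velocities
    // that accumulate while the fixed joint is being resolved.
    dJointID lmotor = dJointCreateLMotor(world, 0);
    dJointAttach(lmotor, objectBody, 0);
    dJointSetLMotorNumAxes(lmotor, 3);
    dJointSetLMotorAxis(lmotor, 0, 0, 1, 0, 0);
    dJointSetLMotorAxis(lmotor, 1, 0, 0, 1, 0);
    dJointSetLMotorAxis(lmotor, 2, 0, 0, 0, 1);
    dJointSetLMotorParam(lmotor, dParamVel,  0);
    dJointSetLMotorParam(lmotor, dParamVel2, 0);
    dJointSetLMotorParam(lmotor, dParamVel3, 0);
    dJointSetLMotorParam(lmotor, dParamFMax,  50);  // placeholder braking force
    dJointSetLMotorParam(lmotor, dParamFMax2, 50);
    dJointSetLMotorParam(lmotor, dParamFMax3, 50);

    // Angular "brake" with target angular velocity zero.
    dJointID amotor = dJointCreateAMotor(world, 0);
    dJointAttach(amotor, objectBody, 0);
    dJointSetAMotorNumAxes(amotor, 3);
    dJointSetAMotorAxis(amotor, 0, 1, 1, 0, 0);     // axes fixed to the body
    dJointSetAMotorAxis(amotor, 1, 1, 0, 1, 0);
    dJointSetAMotorAxis(amotor, 2, 1, 0, 0, 1);
    dJointSetAMotorParam(amotor, dParamVel,  0);
    dJointSetAMotorParam(amotor, dParamVel2, 0);
    dJointSetAMotorParam(amotor, dParamVel3, 0);
    dJointSetAMotorParam(amotor, dParamFMax,  5);   // placeholder braking torque
    dJointSetAMotorParam(amotor, dParamFMax2, 5);
    dJointSetAMotorParam(amotor, dParamFMax3, 5);
}

// Each frame the hand body is simply snapped to the tracked pose:
//   dBodySetPosition(userBody, x, y, z);
//   dBodySetQuaternion(userBody, q);
```
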
There are still a lot of open topics, and so far I have only been able to achieve some sliding (the user moves a virtual object, which obeys collisions with a static environment) in an artificial toy environment. Maybe we will try a different physics library, but my guess is that the biggest problem is the lack of proper penetration depth information.

Collision Detection for Sliding Simulations in VR

Posted by Kaya Kupferschmidt • Wednesday, March 28. 2007 • Category: Programming
Currently I have to develop a robust method for sliding simulation in an immersive VR environment. This means that the user in a CAVE should be able to move objects around, but these movements should be restricted by collisions with a static environment. Unsurprisingly, this task turns out to be non-trivial. We chose to integrate ODE as a physics simulation backend, combined with our own collision engine originally developed by Gabriel Zachmann.

There are two obvious problems:
  • The virtual body moved by the user eventually has to be moved by ODE. This means that I had to extract the forces needed to move the object as the user desires and pass them to the physics engine. This was rather easy once I understood what the terms torque and inertia tensor mean (both are needed for rotational movements); see the first sketch after this list.

  • The more complex problem is the integration of the collision engine. The integration itself was straightforward, but the real problem is that as soon as a collision is detected, the simulation gets out of control. The reason is that what we really need is a penetration depth, or else we have to approximate the exact time of the first collision between two bodies. As our collision engine does not provide a penetration depth, I have to go down the second road and approximate the time of collision by progressively subdividing timesteps whenever a collision occurs (see the second sketch after this list).
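
As an illustration, this is roughly what the force/torque extraction boils down to: a simplified sketch, not the project code. The desired velocities would come from the user's hand motion over one frame, and both the gyroscopic term and the rotation of the inertia tensor into the world frame are ignored for brevity.

```cpp
#include <ode/ode.h>

// Turn a desired linear and angular velocity into a force and torque
// applied to an ODE body for one simulation step of length dt.
void driveBody(dBodyID body,
               const dReal desiredLinVel[3],
               const dReal desiredAngVel[3],
               dReal dt)
{
    dMass m;
    dBodyGetMass(body, &m);

    const dReal* v = dBodyGetLinearVel(body);
    const dReal* w = dBodyGetAngularVel(body);

    // F = m * dv/dt
    dReal F[3];
    for (int i = 0; i < 3; ++i)
        F[i] = m.mass * (desiredLinVel[i] - v[i]) / dt;

    // tau = I * dw/dt  (m.I is the 3x3 inertia tensor, stored row-padded 3x4;
    // strictly it would have to be rotated into the world frame first)
    dReal dw[3], tau[3];
    for (int i = 0; i < 3; ++i)
        dw[i] = (desiredAngVel[i] - w[i]) / dt;
    dMultiply0(tau, m.I, dw, 3, 3, 1);

    dBodyAddForce(body, F[0], F[1], F[2]);
    dBodyAddTorque(body, tau[0], tau[1], tau[2]);
}
```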


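The timestep subdivision mentioned above is essentially a bisection over the step interval. Here is a minimal sketch, where advanceTo and isColliding stand in for the actual simulation and collision-engine calls:

```cpp
// Bisect the interval [t0, t1] until the first contact time is bracketed
// tightly enough. Precondition: no collision at t0, collision at t1.
double findCollisionTime(double t0, double t1,
                         void (*advanceTo)(double),  // reset + step world to time t
                         bool (*isColliding)(),      // query the collision engine
                         int maxIterations)
{
    for (int i = 0; i < maxIterations; ++i) {
        double tMid = 0.5 * (t0 + t1);
        advanceTo(tMid);
        if (isColliding())
            t1 = tMid;       // first contact lies in [t0, tMid]
        else
            t0 = tMid;       // first contact lies in [tMid, t1]
    }
    return t0;               // last known collision-free time
}
```
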
Interestingly, I found out that commercial physics packages seem to employ much more advanced collision algorithms which can calculate the penetration depth, or which work in a continuous mode and thus calculate the exact time of the first collision. In addition, many games use simplified collision geometry and special bodies (spheres, cylinders, boxes) which make such calculations much easier, while we have to cope with arbitrary high-resolution triangle meshes.

While looking for solutions on the net, I found two good PhD theses on physics simulations:

Easy Bytecode

Posted by Kaya Kupferschmidt • Wednesday, February 28. 2007 • Category: Programming
Writing an interpreter for a custom scripting language always seems to be more complex than writing a small bytecode compiler plus a bytecode interpreter. At first glance, writing a direct interpreter might appear easier, but if the scripting language contains flow control (loops, if/else statements and similar constructs involving jumps), this turns out to be false. The primary problem is that one needs to duplicate large parts of the parser, simply for skipping over the parts of a script that are not executed (as with a conditional if-block whose condition turns out to be false at runtime).

Because of this insight, I began to concentrate on writing an easy-to-implement bytecode compiler that transforms a text-based script into a more machine-friendly representation. A positive side effect of bytecode is the simple fact that it is much faster to execute than the original textual representation. The downside of this approach is that it involves writing a compiler, which sounds like a complex and difficult task.

But after analysing the process of parsing a script, I came to the conclusion that such a compiler and its corresponding bytecode interpreter ("virtual machine") would be rather straightforward and easy to implement if the underlying model of the virtual machine is chosen carefully. In my opinion, the best machine model (in terms of simplicity of implementing the compiler and interpreter) is a purely stack-based RPN (Reverse Polish Notation) machine. Such a model is not only easy to implement, it also makes it easy to extract the original syntax tree from the bytecode, which in turn allows further optimisation techniques as a post-processing step (it would not even be too hard to turn the bytecode into native assembler).
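
To illustrate why the stack model keeps things simple, here is a minimal sketch of such a machine. The opcode set and encoding are made up for this example and are of course not the actual implementation; note how the conditional jump replaces the need to re-parse skipped code in an if/else.

```cpp
#include <cstddef>
#include <vector>

// A toy stack-based ("RPN") virtual machine: every operand is pushed onto a
// stack, every operation pops its arguments and pushes its result.
// Example for 2*(3+4): PUSH 2, PUSH 3, PUSH 4, ADD, MUL, END
enum Opcode { OP_PUSH, OP_ADD, OP_MUL, OP_JMP, OP_JMP_IF_FALSE, OP_END };

struct Instruction { Opcode op; double arg; };  // arg: constant or jump target

double run(const std::vector<Instruction>& code)
{
    std::vector<double> stack;
    std::size_t pc = 0;
    while (pc < code.size() && code[pc].op != OP_END) {
        const Instruction& ins = code[pc++];
        switch (ins.op) {
        case OP_PUSH:
            stack.push_back(ins.arg);
            break;
        case OP_ADD: {
            double b = stack.back(); stack.pop_back();
            stack.back() += b;
            break;
        }
        case OP_MUL: {
            double b = stack.back(); stack.pop_back();
            stack.back() *= b;
            break;
        }
        case OP_JMP:                              // unconditional jump
            pc = static_cast<std::size_t>(ins.arg);
            break;
        case OP_JMP_IF_FALSE: {                   // all an if-block really needs
            double cond = stack.back(); stack.pop_back();
            if (cond == 0.0)
                pc = static_cast<std::size_t>(ins.arg);
            break;
        }
        default:
            break;
        }
    }
    return stack.empty() ? 0.0 : stack.back();
}
```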

Continue reading "Easy Bytecode"

Reflection for C++

Posted by Kaya Kupferschmidt • Monday, February 26. 2007 • Category: C++
One hot topic I am currently busy with is reflection for C++. Reflection means that a program can access all of its types, together with their methods and members, at runtime through a simple string-based interface. Such a feature especially simplifies binding scripting languages to a program: instead of binding each class, method or function by hand, one generic wrapper operates on the reflection information. Other possible uses are remote method invocation, serialisation, XML-based configuration, and so on.
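
To make the idea concrete, here is a toy illustration of what a string-based interface boils down to. It is in no way Magnum's implementation, just a hand-written registry of the kind a real reflection system would generate automatically.

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>

struct Sphere {
    double radius;
    void setRadius(double r) { radius = r; }
};

// One entry per registered class: a factory plus its methods, keyed by name.
struct ClassInfo {
    std::function<void*()> create;
    std::map<std::string, std::function<void(void*, double)> > methods;
};

int main()
{
    // Register the class and one method by hand; a reflection framework
    // would generate this table automatically (e.g. from gccxml output).
    std::map<std::string, ClassInfo> registry;
    ClassInfo& info = registry["Sphere"];
    info.create = []() -> void* { return new Sphere(); };
    info.methods["setRadius"] = [](void* obj, double arg) {
        static_cast<Sphere*>(obj)->setRadius(arg);
    };

    // Client code needs nothing but strings to create an object and call a method.
    void* obj = registry["Sphere"].create();
    registry["Sphere"].methods["setRadius"](obj, 2.5);
    assert(static_cast<Sphere*>(obj)->radius == 2.5);
    delete static_cast<Sphere*>(obj);
    return 0;
}
```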

C++ offers only very basic runtime type information (RTTI) out of the box and lacks full reflection. There are some projects on the net that try to close this gap (most notably the Reflex framework), but none of them really seems to be as powerful, flexible and easy to use as the native counterparts in Java or C#.

Dimajix's framework Magnum will soon contain some new modules that try to fill this gap by offering the following tools:

  • A generic Meta-Compiler based upon gccxml together with a specialised XML-based transformation language.

  • Non-intrusive, full reflection for all public elements of any C++ program given in source.

  • A Java-like scripting language built on top of the reflection together with a custom bytecode compiler.


With these tools, one can easily add generic scripting capabilities to any C++ program with only a little one-time effort. It is also possible to use the meta-compiler to automatically transform any type information given in C++ headers into any kind of text-based file, by providing a set of transformation rules which are applied to the C++ meta-information generated with gccxml.

The new package is not completely finished yet and some features are still missing, but work is steadily progressing. For a preview, you can simply check out the latest version of Magnum using Subversion at svn://subversion.dimajix.de/magnum.

Status of Magnum

Posted by Kaya Kupferschmidt • Saturday, January 13. 2007 • Category: C++
A short intermission with a status report on Magnum. I have finally put up a public SVN repository, which can be reached at svn://dimajix.de/magnum. The repository has public read access for everyone and is synchronised with my private development repository every night, so it is almost always up to date.

The most important feature that will be included in the next release is full reflection. This means that the complete type information, including classes, structs, unions, enums, methods, functions, fields and variables, can be accessed via a dynamic interface at runtime. You can even invoke any method or create new instances of arbitrary objects. This should make it rather easy to build a small scripting language which can access all types defined in Magnum.

This is a really huge undertaking, but most of the really hard stuff is already working (you can download the newest version via Subversion, as indicated above). Still missing are some intelligent automatic argument casting for method invocation (so that you do not have to care about the exact required types), some fixes for non-public class members and, most importantly, the integration into the build system.

What is really special about my implementation of reflection is that you will not need to modify your source code to make your classes accessible via reflection (at least I try really hard to avoid the need for changes to the source code for which reflection is to be generated). The meta-compiler, which reads in all headers and creates the cpp files containing the runtime information needed for reflection, is also special in that it uses a generic XML-based language to transform the meta-information parsed from the original header files into a new text-based output file. The meta-compiler is not tied to my implementation of reflection and can (and will) be used for many other automatic header transformations. For example, one could write an XML template that generates serialisation code for arbitrary classes (this actually shouldn't be too hard). Other possible uses include the generation of COM or .NET wrappers (or wrappers for any other scripting language), etc.
