The Cell Processor - A new Software Paradigm

Posted by Kaya Kupferschmidt • Friday, February 4. 2005 • Category: Programming
There have been a lot of articles on the new Cell architecture over the last couple of weeks. Many facts are still unknown, and the predicted performance seems overwhelming, even if it turns out to be only half as high as speculated.

I think the whole Cell architecture contains some very interesting concepts aside from the hardware details, the most outstanding in my eyes being the concept of the so-called apulets. Apulets are small programs that are sent to a Cell processor, and an apulet contains everything needed for processing: both the code and the data. This amounts to a very different, almost object-oriented approach to hardware design, although an apulet does not necessarily correspond directly to an object as defined in a programming language.
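To make the idea concrete, here is a minimal sketch of what such a code-plus-data bundle could look like. Everything here is invented for illustration: the class name, fields, and the toy dispatcher are assumptions, not the actual Cell apulet format, and the "code" is interpreted by a plain Python callable rather than a real processing element.

```python
from dataclasses import dataclass

@dataclass
class Apulet:
    """Hypothetical apulet: a self-contained work unit bundling code and data."""
    code: bytes  # the program to run on the processing element
    data: bytes  # the input data the program operates on

def dispatch(apulet, interpreter):
    # A toy dispatcher: in a real Cell system the code would be native
    # instructions; here an interpreter callable stands in for the hardware.
    return interpreter(apulet.code, apulet.data)

result = dispatch(
    Apulet(code=b"reverse", data=b"hello"),
    lambda code, data: data[::-1] if code == b"reverse" else data,
)
print(result)  # b'olleh'
```

The point is only the packaging: the receiver needs no prior knowledge of the program, because the work unit carries it along with its input.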

But I really like the new idea of mixing data and code, as long as the code is standardized and portable across different Cell implementations and operating systems. Properly employed, this would open up radically new possibilities for audio and video codecs. Imagine that with Cell processors one would only need some sort of meta codec for videos: it would contain the usual information found in digital media files, like the video's size in pixels and the number of audio channels, plus the encoded data. But in addition it would also contain the complete decoder as a small Cell program. This would make installing the correct codec and keeping it up to date obsolete, as each media file would come with its own decoder. Of course such a thing would already be possible with the current hardware architecture, but the problem is that it would not be portable across different hardware platforms. But granted that the Cell processor will have a standardized instruction set, every device containing a Cell processor would be able to load and execute the software decoder embedded in the video file.
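A minimal sketch of such a "meta codec" container might look as follows. The layout, field names, and length-prefixed framing are all assumptions made up for this example; no real container format is implied, and the decoder here is just an opaque byte string standing in for a small Cell program.

```python
import json
import struct

def pack(meta: dict, decoder: bytes, payload: bytes) -> bytes:
    # Hypothetical container: three length-prefixed sections holding the
    # metadata (JSON), the embedded decoder program, and the encoded data.
    header = json.dumps(meta).encode()
    return b"".join(
        struct.pack(">I", len(part)) + part
        for part in (header, decoder, payload)
    )

def unpack(blob: bytes):
    # Read the three sections back by walking the length prefixes.
    parts, offset = [], 0
    for _ in range(3):
        (length,) = struct.unpack_from(">I", blob, offset)
        parts.append(blob[offset + 4 : offset + 4 + length])
        offset += 4 + length
    header, decoder, payload = parts
    return json.loads(header), decoder, payload

blob = pack(
    {"width": 640, "height": 480, "audio_channels": 2},
    b"<decoder program for the Cell processor>",
    b"<encoded frames>",
)
meta, decoder, payload = unpack(blob)
```

A player on any Cell device would read the metadata, hand the decoder section to the processor, and stream the payload through it, without ever needing a separately installed codec.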

No more need to install the latest version of the DivX decoder or hunt down a suitable decoder for a specific video on the net. All necessary information on how to decode the media file would be embedded in the file itself.

As the Cell architecture already defines the new term "apulet", this would only be a logical consequence.

Kaya

2 Comments

  1. The idea of including the codec with the media file is interesting, but I wonder whether the Cell processor will make it possible.

    First of all, I must admit I have not read a whole lot about the Cell processor, but I imagine it to be something similar to a more generalized vertex processor added to the main processor (?).

    An apulet, would it not work with streams of data? I don't quite see how they could give the Cell processor access to the full main memory; more likely it will work with streams of data like GPUs do today (or am I wrong?).

    This would probably mean that it will be impossible to access data at another position within the stream (just like you cannot access the other vertices from within a vertex program).

    As long as you have that restriction, you cannot make the codec truly general, since you must decide in advance what data to include. Assume for instance that the codec uses the nearby pixels to calculate the value of the current one. Or perhaps the pixels of the previous frame? Or the results of some statistical analysis?

    If the apulet truly works with large allocated blocks of memory, then the kind of format you proposed here is indeed possible, and we will probably see one soon.

  2. That's an interesting point you mention here. As far as I understand, the Cell processor really is something like an advanced vertex processor, like the one found in the GeForce 6 series, but I guess it will have some more advanced flow control.

    Concerning the memory access, I really don't know whether a processing unit can directly access the whole main memory. But each unit does have some local memory (about 8 MB, if I remember correctly) which can be accessed randomly by the unit (in analogy to a texture, I guess).

    But I think that should not be a problem, as most audio/video codecs are inherently stream oriented. Of course there must be some code running on the normal CPU in my model, but this code would only have to take care of the container format itself, i.e. it would have to load the correct data from the media file and pass it to the Cell processor.

    On the other hand, as the current media formats are defined by companies other than IBM, Sony, and Toshiba, I guess we won't see such an implementation, at least not with broad support from the important companies (Apple and Microsoft come to mind).

    Kaya
