GrandVJ 1.2 beta 1 supports Flash AS 3, Quartz Composer, APC 40 and many other …

We just released a new beta version of GrandVJ; you can download it from our forums:

http://www.arkaos.net/forum/viewtopic.php?f=26&t=6828

This version has many cool new features which are detailed in the forum post; here is a recap:

  • New ultra-smooth engine
  • Support for Akai APC40 & generic MIDI feedback
    (also for Behringer BCR2000/BCF2000, Livid OHM, ...)
  • Quartz Composer player (Mac OS X)
  • Support for Flash ActionScript 3
  • Cell layer assignment
  • Mixer state saving
  • Added master blackout button
  • New enhanced soft-edging algorithm
  • Added vertical/horizontal position presets for TripleHead2Go
  • Several bug fixes
  • New fullscreen option (PC)

We plan to ship the final version at the end of this month, so we encourage you to check that this new version works fine for you and to report any problems or suggestions by creating a topic in the GrandVJ forum.

Happy testing! We hope you will have as much fun using this version as we had coding it!

Easily converting an image into C++ / Obj C code..

If you need a way to simply convert a small image into C++ / Obj C code that can be compiled by GCC or any other modern compiler, there are a few utilities out there. In my case I was not happy with the results, so I wrote a few lines of code to do this.

What I needed was simple code that fills an array of chars which can then be used by OpenGL or custom pixel processing code. It's very easy to load an image into an NSBitmapImageRep; once you have it, accessors like pixelsWide, bitmapData, … let you play with the pixels, and it's then straightforward to generate text that is valid C++ / Obj C.

Here is a small 16 by 16 image converted into code with my utility:

int gImageWidth = 16;
int gImageHeight = 16;
int gImageBits = 24;
unsigned char gImagePixels[] = {
// line 0
0xf2,0xf1,0xf1,0xf1,0xf1,0xf1,0xf1,0xf1,0xf1,0xf1,0xf1,0xf1,
0xf1,0xf1,0xf1,0xf0,0xf0,0xf0,0xf0,0xf0,0xf0,0xf0,0xf0,0xf0,
0xf0,0xf0,0xf0,0xf0,0xf0,0xf0,0xef,0xf0,0xf0,0xef,0xef,0xef,
0xef,0xef,0xef,0xef,0xef,0xef,0xef,0xef,0xef,0xee,0xee,0xee,

// line 1
0xe5,0xe5,0xe5,0xe5,0xe5,0xe5,0xe4,0xe4,0xe4,0xe4,0xe4,0xe4,
0xe4,0xe4,0xe4,0xe4,0xe4,0xe4,0xe4,0xe3,0xe4,0xe3,0xe3,0xe3,
0xe3,0xe3,0xe3,0xe3,0xe3,0xe3,0xe3,0xe3,0xe3,0xe2,0xe2,0xe2,
0xe2,0xe2,0xe2,0xe2,0xe2,0xe2,0xe2,0xe2,0xe2,0xe1,0xe1,0xe1,

// line 2
0xd7,0xd7,0xd7,0xd7,0xd7,0xd7,0xd7,0xd7,0xd7,0xd6,0xd6,0xd6,
0xd6,0xd6,0xd6,0xd6,0xd6,0xd6,0xd6,0xd6,0xd6,0xd5,0xd5,0xd5,
0xd5,0xd5,0xd5,0xd5,0xd5,0xd5,0xd4,0xd5,0xd5,0xd4,0xd4,0xd4,
0xd4,0xd4,0xd4,0xd4,0xd4,0xd4,0xd3,0xd3,0xd4,0xd3,0xd3,0xd3,

And here is the code of the small command line utility:

#import <Cocoa/Cocoa.h>

int main (int argc, const char * argv[])
{
    if (argc >= 3)
    {
        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
        NSString *src_file = [[NSString alloc] initWithFormat: @"%s", argv[1]];
        NSData *raw = [NSData dataWithContentsOfFile: src_file];

        if (raw)
        {
            NSBitmapImageRep *rep = [NSBitmapImageRep imageRepWithData: raw];
            if (rep)
            {
                FILE *out = fopen (argv[2], "w");
                if (out)
                {
                    int width = (int) [rep pixelsWide];
                    int height = (int) [rep pixelsHigh];
                    int bits = (int) [rep bitsPerPixel];
                    int rowbytes = (int) [rep bytesPerRow];
                    unsigned char *src = [rep bitmapData];

                    fprintf (out, "int gImageWidth = %d;\n", width);
                    fprintf (out, "int gImageHeight = %d;\n", height);
                    fprintf (out, "int gImageBits = %d;\n", bits);
                    fprintf (out, "unsigned char gImagePixels[] = {\n");

                    // Write the pixels row by row, 12 bytes per output line,
                    // in the same format as the sample shown above.
                    int bytesperpixel = bits / 8;
                    for (int v = 0; v < height; v++)
                    {
                        unsigned char *row = src + v * rowbytes;
                        fprintf (out, "// line %d\n", v);
                        for (int h = 0; h < width * bytesperpixel; h++)
                        {
                            fprintf (out, "0x%02x,", row[h]);
                            if ((h % 12) == 11)
                                fprintf (out, "\n");
                        }
                        fprintf (out, "\n");
                    }
                    fprintf (out, "};\n");
                    fclose (out);
                }
            }
        }
        [src_file release];
        [pool release];
    }
    return 0;
}
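If you want to try it yourself, it builds as a plain command line tool on Mac OS X; with hypothetical file names the build and run steps would look like this:

gcc -framework Cocoa -o img2code img2code.m
./img2code source.png pixels.h

The generated file can then be #included in your project and the gImagePixels array handed to OpenGL (glTexImage2D for example) or to your own pixel processing code.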

ArKaos MediaMaster used at Australian Idol

A popular use of ArKaos MediaMaster is on TV stages, where it's very convenient to use its large library of content, up to 60,000 items, and to drive both the videos on the walls and the lighting from the same DMX console.

In this case there are two Mac Pro servers, but you could also run MediaMaster on a Windows server.

Here are some extracts of the show:

This is a remix I made from videos found on video-sharing web sites. You need QuickTime installed on your machine to view this video.

Why software genlock at 60 FPS matters!

Since MediaMaster 1.1 we have revamped our video engine, particularly the synchronization and multi-threading.

We now perform what can be called a software genlock to ensure the best possible fluidity if your machine has a multi-core processor. Genlock is the act of locking the frequency of a media source onto a reference signal or clock; there is a nice description of it on Wikipedia.

When the software must present a frame, the work that has to be done can be cut into 3 parts: getting the video frames from the disk, uploading them to the graphics card, and doing the composition / blending of the pixels for the presentation.

Because of the way disks work, and because the time required by a codec to turn the data from the disk into a decompressed frame is not constant, this can create fluidity problems.

So at each new frame the software wakes up and starts working sequentially on the 3 phases. In a classical real-time video processing application this works like this:

Traditional video application

This graph shows an application trying to play a video loop encoded at 30 fps on a monitor running at 60 fps. In a perfect world the application should present each frame of the movie exactly twice.

There are two problems with the traditional way of doing the video processing:

  • The time base is synchronized to the clock of the computer, so there is a drift between the monitor frequency and the internal clock of the computer. This means that even if your computer were extremely powerful, you would still see small hiccups once in a while (the little simulation after this list makes this concrete).
  • When the software starts working to display a new frame it has, in this example, only 1/60th of a second to read a video frame, upload it and present it to the user. The available time depends on the fps of the monitor, not on the fps of the video source, so the higher the fps of the monitor, the more stress you put on your system.
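Here is a tiny stand-alone simulation of the first problem. It's only my illustration of the principle, not MediaMaster code, and it assumes a 30 fps movie on a monitor that actually refreshes at 59.94 Hz instead of a perfect 60 Hz. When the frame to display is picked from elapsed wall-clock time, every once in a while a movie frame ends up displayed only once instead of twice, which is exactly the hiccup described above.

/* Toy simulation of the drift problem: the frame to display is chosen from
   the computer clock while the monitor runs at its own frequency.
   (Illustration only; the 59.94 Hz refresh rate is an assumption.) */
#include <math.h>
#include <stdio.h>

int main (void)
{
    const double movie_fps  = 30.0;
    const double monitor_hz = 59.94;   /* "60 Hz" monitors are rarely exact */
    const int    refreshes  = 6000;    /* roughly 100 seconds of playback */

    static int shown[4000];            /* how many refreshes each movie frame got */

    for (int vsync = 0; vsync < refreshes; vsync++)
    {
        double elapsed = vsync / monitor_hz;             /* wall-clock time */
        int    frame   = (int) floor (elapsed * movie_fps);
        shown[frame]++;
    }

    /* In a perfect 30-in-60 world every frame would be displayed exactly twice. */
    int last = (int) floor ((refreshes - 1) / monitor_hz * movie_fps);
    for (int f = 0; f < last; f++)
        if (shown[f] != 2)
            printf ("hiccup: movie frame %d displayed %d time(s)\n", f, shown[f]);

    return 0;
}

On a perfectly genlocked system this little program would print nothing.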
The processing done by MediaMaster is much more elegant. Since 1.1 we have 3 modes: the original one, a buffered mode and a frame blending mode.

In this article I focus on the buffered mode; I will write more about frame blending later.

So in buffered mode the graphics card always has one frame in advance, ready to be composed. As soon as a frame has been processed and presented to the user, the software reads and uploads the next video frame in advance.

The other thing that is done in buffered mode is that the clock of the content is no longer taken from the computer clock but rather from the monitor frequency. To my knowledge, most other media players are not doing something as subtle as this.

So if we keep the same timings as in the first example but simply shuffle them according to the way the buffered mode works, here is the result:

Software genlocked video

Because the movie clock is genlocked to the monitor, and because we always have one frame in advance, we play the movie with perfect regularity and timing: 1 1 2 2 3 3 4 4 … each frame is played exactly twice.
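In code, the scheduling of the buffered mode boils down to something like the sketch below. This is only an outline of the principle, not the actual MediaMaster engine, and the helper names (read_frame, upload_frame, compose_and_present, standing in for the three phases described earlier) are made up: the content clock is the vsync counter, and the next frame is read and uploaded right after the current one has been presented, so it is already sitting on the graphics card when the following refresh arrives.

/* Outline of a buffered, software genlocked playback loop.
   read_frame / upload_frame / compose_and_present are hypothetical helpers
   standing in for the disk, upload and composition phases. */
#include <stdio.h>

#define MOVIE_FPS   30
#define MONITOR_FPS 60

static void read_frame (int index)          { /* decode frame `index` from disk */ }
static void upload_frame (int index)        { /* push its pixels to the graphics card */ }
static void compose_and_present (int index) { /* blend the layers and show the result */ }

int main (void)
{
    int uploaded = 0;                        /* frame already waiting on the graphics card */
    read_frame (uploaded);
    upload_frame (uploaded);

    for (int vsync = 0; vsync < 8; vsync++)  /* one iteration per monitor refresh */
    {
        compose_and_present (uploaded);      /* no disk or upload work on the critical path */
        printf ("vsync %d -> movie frame %d\n", vsync, uploaded);   /* 0 0 1 1 2 2 3 3 */

        /* the content clock is the vsync counter, not the computer clock */
        int next = (vsync + 1) * MOVIE_FPS / MONITOR_FPS;
        if (next != uploaded)                /* prepare the next frame in advance */
        {
            read_frame (next);
            upload_frame (next);
            uploaded = next;
        }
    }
    return 0;
}

Driving this loop from the real vertical refresh of the display (for example with a CVDisplayLink callback on Mac OS X) rather than from a timer is what makes it a software genlock.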

If you are still not too sure why it's nice to be genlocked at 60 fps, here is a recording we did in real time from MediaMaster while running 2 layers. The lower one is at 30 fps and the top one is at 60 fps. The loop running at 60 fps is part of our test content; it's a loop running a ramp that allows us to see visually whether the system is perfectly genlocked. We captured the output with Fraps.

If you apply effects to the content you play, the effects will be rendered at 60 fps, and this is why it's so important to have perfect synchronization: your eyes will have the feeling that everything is crisp and fluid.

In order to see the videos in this article you need to have QuickTime installed, and the first video should play smoothly on a recent laptop or on an LCD monitor set at 60 fps.

So here is the original screen grab, which I only scaled down in order to show it in this article; the frame rate is still 60 fps:

Now, in order for you to be able to see the difference, here is the same loop at 30 fps:

I continue to lower the frame rate; now we are at 20 fps:

Here is the most degraded version, at 15 fps:

If you are curious to test this with any software video mixer that can play QuickTime movies, here are our test files:

Horizontal ramp, 2 seconds at 60 fps:

Here is a vertical ramp at 60 fps:

And a zooming rectangle at 60 fps:

Those movies can be downloaded for you to test your systems; you just need software that supports QuickTime *.mov files. For best results you should loop these files and let them run for a while. In a VJ application you can simply add them as the last layer of your composition with additive blending and you will see whether your system is powerful enough and well designed.

Feel free to stress test our software and compare it with others; we think we did very well with MediaMaster, and there is a demo version for you to test on the ArKaos web site.

ArKaos MediaMaster driven by GrandMA through eDMX…

We have been working hard for 3 months on the next upgrade of MediaMaster.

One thing that we want to show in this video tutorial is that we can now listen directly to the GrandMA eDMX protocol. This allows simpler networking between your console and a server running MediaMaster.

Here is the tutorial I made to show how to make it happen:

Music “I want to be a machine” by Pornophonique, licensed from Jamendo PRO.