March 29, 2010

DIY Fog Chiller

I have owned a fog machine for many years now, but I had always wanted to make a fog chiller to get a crawling fog effect. However, the wind around my house in the fall usually makes it pointless for Halloween, so until now I had held off on spending any money on it. But when low-lying fog was needed for a youth play I was helping set up and run lighting for, I finally had the excuse I needed.

So before any details, a bit of physics… A simple fog machine works by pumping a fluid (usually mineral oil, glycol, or a glycol and water mixture) into a heat exchanger, where the fluid is quickly vaporized. This means that the fog is usually somewhat warm; while it does cool down quickly as it expands, similar to compressed air, it is still warm enough that it always rises as it is released. To remedy this, a fog chiller is used to cool the fog down faster so it lays low and clings to the ground. Some simple examples can be found on YouTube that use a metal pipe or plate and ice to cool down the fog, which to my surprise does not need a lot of ice to achieve the task. Though watching expensive professional fog machines run is rather impressive. [Check it out]

Still I wanted to do something more than a simple metal tube, so after purchasing the cheapest plastic bin I could find ($5) I gathered up some PVC tube, mesh window screen, a 120mm computer fan and a plastic sandwich bag.

As you can see from my pictures, the mesh is bent and woven into a wave shape to form a clear path the smoke can move through; this ensures maximum exposure to the ice and allows the fog to fill the container. Next, the mesh smoke path is placed between the PVC inlet and the outlet fan, using the lip in the container to hold it up. Ice is packed all around the mesh as well as in the gaps between the smoke path. A variable voltage transformer and a remote controlled outlet are used to control power and the speed of the fan. It is pointless to leave the fan running all the time, as it will melt the ice faster. Finally, a plastic bag with a stiff edge is used as a simple laminar flow guide to help smooth and direct the fog to the ground for a gentle rolling effect. While I did not get a chance to take a video of it in action, I have included a video of another person's fog chiller which produced the same effect. (AKA, it is not my home, and I do not own such a sketchy rug...)

I was quite happy with the results: the ice lasted, with plenty to spare, while sitting between uses for a good hour, and it produced a good layer of fog for the two scenes that needed it. Interestingly enough, a chiller seems to work better in cold air than warm, since the cold air keeps the fog cool longer; this is the opposite of what I assumed, as I thought warm ambient air would help the cooler fog sink. Also, the slower the fog moves, the lower it will stay, so pumping it into a container first before cooling will help slow it, but I will save that for the v2. As you can see from my results, if you own or want a standard fog machine (quite cheap nowadays), you need to make a fog chiller. Also, don't buy a combined model, as they only produce a slow, steady output of fog, which is only useful for a small indoor room.

March 24, 2010

Poking with ajax

I have played with Ajax before and even used it in a few projects, but never in a way that was obvious.  With the next round of CADY site updates I have finally had a chance to put Ajax to some real use. For the upcoming parent network, which is more or less a very simplified forum I am coding from scratch, I needed a simple and easy-to-use registration page for parents. Setting up a page is easy; making it point out people's mistakes in a dynamic, non-refreshing way is not.

It is good HCI practice to limit screen refreshes to when the location is actually changing; it is what people expect. Whereas if a user submits a form only to have it return and say they missed something, that is not as intuitive, because the user has already moved on in their thought process.  This is where JavaScript and Ajax usually come in.  By allowing the page to call remote content, you can check form fields on changes and then alert the user to missed fields or issues before they submit. You could extend the process and even prevent the submit if things are not correct; however, this falls outside my current beliefs on how applications should handle user control.  If the user wants to submit the page with missing items even after being notified, let them; however, have it fail the submit on the server side and then bring them back to the restored page with a notice of what they missed.  This forces the user to learn that finishing a form properly the first time is worth not having to remember what they were filling out.
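The server-side half of this flow boils down to collecting the required fields that came back missing or blank, so the page can be redrawn with the submitted values plus a notice. A minimal sketch of that check (in Java for illustration; the class and field names are hypothetical, not the actual CADY code):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class FormValidator {
    // Return the names of required fields that were left missing or blank,
    // so the caller can fail the submit and restore the page with a notice.
    public static List<String> missingFields(Map<String, String> submitted,
                                             List<String> required) {
        List<String> missing = new ArrayList<>();
        for (String field : required) {
            String value = submitted.get(field);
            if (value == null || value.trim().isEmpty()) {
                missing.add(field);
            }
        }
        return missing;
    }
}
```

An empty result means the submit can proceed; anything else gets echoed back to the user alongside their restored form data.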

You can check out the CADY Parent Network Registration Page to see my progress and fancy ajax trimmings, however please do not sign up unless you are actually a parent who is interested.

March 9, 2010

Growth Issues, SN Java BufferedImage Transition

With my SN Project slowly coming closer to completion, I decided that I would try making a simple game to see what needs work; however, like always, my plans were derailed quite quickly. I had a small library of images that was working quite well at the time, but after increasing the library to a couple thousand images, I quickly noticed that things were not working quite right. Well… actually not at all: it was throwing a java.lang.OutOfMemoryError and crashing. The memory error was an easy fix; adding a VMOptions tag to the Info.plist with -Xmx1024m allowed me to set the max memory to a much greater amount. However, this was nothing more than a temporary fix, and I knew there was a much greater issue behind this problem.
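For reference, the VMOptions entry lives inside the Java dictionary of the bundle's Info.plist. A sketch of the relevant fragment, assuming the key layout used by Apple's Java application bundles of that era:

```xml
<key>Java</key>
<dict>
    <!-- Passed to the JVM at launch; raises the max heap to 1 GB -->
    <key>VMOptions</key>
    <string>-Xmx1024m</string>
</dict>
```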

The size of the image library was around 150 MB, which when loaded into memory would be larger due to it being uncompressed; however, it should not have been much larger than 2x the base size, or 300 MB as an upper limit. I was shocked when the memory needed skyrocketed to well over 700 MB, whereas the standard max for the Java VM is around 100 MB. So I went looking for a memory leak. Java handles most memory issues, but it can still have problems with large collections of data gathered in a short period of time, due to the built-in garbage collection running only occasionally as needed. After doing some research via Google, I stumbled upon a complaint about the MediaTracker keeping references to the images it tracks, which in small numbers is not an issue, but can quickly build up as more images are tracked. This was exactly what I was doing wrong.

The standard Image class in Java does not load the image into memory right away; instead, it acts as a reference until it is needed. This behavior could be seen when removing the method calls that add images to a MediaTracker: the memory used would only increase to around 50 MB and slowly grow as images were loaded when needed. The problem with this is that it causes flickering in animations. When each frame is used for the first time, it is actually drawn on the second call, after the first call forces it to be loaded into memory. The standard practice at the time of Java 1.3 to 1.4 was to use a MediaTracker to force the images to load, and for my initial use this worked quite well. However, my project has finally grown beyond the simple use of a MediaTracker. The next step I took was to try some suggestions on limiting the memory retention by removing images from the tracker once they are loaded, or by using separate trackers for each image, as well as forcing garbage collection with System.gc(). A mix of these solutions did lower my memory use; however, as usual, the trade-off for space was time, and this setup slowed my image loading algorithm to a crawl.

It was quite apparent that I needed something better, and after a bit more research I decided to transition my SN Project from using the base Image class to the improved BufferedImage class available in Java 1.5+. I had actually been using the BufferedImage class for a few things already, but a total transition was not a simple matter. For the most part you can use BufferedImage anywhere you are already using Image, due to BufferedImage extending the Image class. One huge benefit was that the ImageIO class, unlike the awt Toolkit, loads the whole image into memory, so a MediaTracker is not needed; this sped things up greatly and removed my memory leak. Compared to before, it was now only using around 230 MB, which was well inside my expected limits.

The big problem that I ran into while changing over my code was that the sneaky way I was loading my animation class into my image array on the engine side would no longer be feasible. Originally, I had found a nifty solution to my space problem: with a bit of tweaking, my animation class could extend the Image class and then be inserted into my image array, allowing quick access to both the game animations and images through one simple method call. It also only needed minimal conditional checks, which I already had in place. While this worked well with Image, there is no such luck with BufferedImage. You can serialize a class that extends BufferedImage, but you cannot deserialize it, because it lacks the "no argument" constructor needed to reconstruct the base class. This left me frustrated and quite annoyed; by this point the editor was ready and working, but the engine would not accept the image data.

A few hours later, I had separated the animations from the image array and while I was sad that I had to abandon my unique solution, it was probably for the best as it is easier to figure out what the code is doing now. This is where big problem two popped up. I was using a PixelGrabber to export my images to an int array and then reconstructing them with MemoryImageSource and the awt Toolkit to load them back into an Image and then forcing them into memory with a MediaTracker. This was not going to work as I was now avoiding the Toolkit and MediaTracker classes. With some more research I finally pieced together that I could do a similar process but this time to a byte array. A simple example of what I am doing is listed below.

Exporting is the same as saving the image, but to a byte stream instead of a file.

ByteArrayOutputStream bstream = new ByteArrayOutputStream();
ImageIO.write(img, "png", bstream);
byte bytearray[] = bstream.toByteArray();

To convert the image back you can read the byte array using a ByteArrayInputStream.
BufferedImage image = ImageIO.read(new ByteArrayInputStream(bytearray));
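Putting the two snippets together, the round trip looks like this as a self-contained sketch (the class name and the test image are just for illustration):

```java
import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import javax.imageio.ImageIO;

public class ImageBytes {
    // Export: encode the image as PNG into a byte array instead of a file.
    public static byte[] toBytes(BufferedImage img) throws IOException {
        ByteArrayOutputStream bstream = new ByteArrayOutputStream();
        ImageIO.write(img, "png", bstream);
        return bstream.toByteArray();
    }

    // Import: decode the byte array back into a fully loaded BufferedImage,
    // with no Toolkit or MediaTracker involved.
    public static BufferedImage fromBytes(byte[] bytes) throws IOException {
        return ImageIO.read(new ByteArrayInputStream(bytes));
    }

    public static void main(String[] args) throws IOException {
        BufferedImage img = new BufferedImage(32, 16, BufferedImage.TYPE_INT_ARGB);
        BufferedImage copy = fromBytes(toBytes(img));
        // The reconstructed image keeps its original dimensions.
        System.out.println(copy.getWidth() + "x" + copy.getHeight());
    }
}
```

Since PNG is lossless, the pixel data survives the round trip intact, which matters when the bytes are being shuttled between the editor and the engine.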

At this point, around ten hours after I started, I now had a working engine and editor again; however, it was rendering slower than before. I commented out most of the drawing and logic method calls and found that the engine would run around 50 FPS with minimal drawing; on the other hand, with all calls back on, it ran around 8 FPS. Interestingly enough, the logic was not the issue; instead, drawing a single large transparent image of the GUI was causing the 40 FPS reduction. I was aware that transparency always causes a speed reduction due to the extra processing needed to render it, but not by that much!

It took some more searching to figure out what was wrong. In Java, images come in a few different types running in different modes; examples of these would be Image, VolatileImage and BufferedImage. I found VolatileImage quite fascinating, as it is the fastest since it is always stored in graphics hardware memory, but the trade-off is that it may or may not be available at any time, due to the possibility of it being overwritten by something else in the limited space of video memory. You must repeatedly check to see if it is still there; the descriptions of this made me think of trying to arrange a large group of very hyper children into a pattern, while at any point they may scatter. Luckily, with new changes in the BufferedImage class, it too tries to run in video memory if possible, but only if it is set up to be compatible with the current video configuration. Apparently, using the ImageIO read method does not always create the fastest images; it was suggested that you create a more compatible image using the current graphics configuration and draw the loaded image onto that. This seems to cause only a minimal increase in processing that is easily outweighed by the huge increase in rendering speed. After converting all the SNEngine's BufferedImage objects with the GraphicsConfiguration createCompatibleImage() method, a huge increase could be seen, going from 8 FPS to around 35 FPS, with my target being 30 FPS.
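The conversion step described above can be sketched like so (the class name is mine; the headless fallback is an assumption added so the snippet also runs where no screen configuration exists, which is not something the engine itself needs):

```java
import java.awt.Graphics2D;
import java.awt.GraphicsConfiguration;
import java.awt.GraphicsEnvironment;
import java.awt.Transparency;
import java.awt.image.BufferedImage;

public class CompatibleImageLoader {
    // Redraw an image into one matching the screen's pixel layout so Java 2D
    // can keep it cached in video memory and render it quickly.
    public static BufferedImage toCompatible(BufferedImage src) {
        if (GraphicsEnvironment.isHeadless()) {
            return src; // no screen, nothing to be compatible with
        }
        GraphicsConfiguration gc = GraphicsEnvironment
                .getLocalGraphicsEnvironment()
                .getDefaultScreenDevice()
                .getDefaultConfiguration();
        if (src.getColorModel().equals(gc.getColorModel())) {
            return src; // already in the screen's format
        }
        BufferedImage dst = gc.createCompatibleImage(
                src.getWidth(), src.getHeight(), Transparency.TRANSLUCENT);
        Graphics2D g = dst.createGraphics();
        g.drawImage(src, 0, 0, null); // one-time copy into the compatible image
        g.dispose();
        return dst;
    }
}
```

Running every image through a method like this once, right after ImageIO.read, is the small up-front cost that buys the rendering speedup.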

I still have some more work to do on increasing efficiency, but this tedious transition was an eye-opening experience. As usual, I have learned more than I expected, but this new information on Java graphics will probably come in handy later. (NOTE: This was found to be true on OSX 10.6 compiling for Java 1.5+; while most things should be similar on other platforms, the low-level hardware acceleration for graphics does differ slightly on each operating system and JDK.)

March 1, 2010

Java JNI Custom About Box

It is silly just how long it took me to figure out this one line of code... A few months ago, I had finished porting my SN Project from standard Java to the then-new JNI Library in Xcode 3.2. However, no matter how much I poked, prodded and searched, I could not figure out how to override the simple About Box I am assuming is provided by the JavaApplicationStub.

Luckily, today I finally stumbled on the answer in an obscure thread on the Apple Mailing List. The short answer is that I was not telling the event that it was already handled before letting the method end, and the default box was called as a result.

You can see below that you need to add setHandled(true); to the handleAbout ApplicationEvent handler that is in the OSXAppAdaptor class provided with the JNI Template, or wherever you are handling the EAWT action calls.

public void handleAbout(final ApplicationEvent e) {
    // ... show the custom about window here ...
    e.setHandled(true); // suppresses the default About Box
}

I actually feel a bit stupid for not noticing this sooner, as a few of the other action call methods were already calling setHandled, but it was not that clear what it was doing, so I completely overlooked it. Well, with this fixed I now have all the kinks worked out; now on to finishing up the features and finally hitting a stable 1.0 release, though it may take another year.