Thursday, March 18, 2010

Took me a while

In order to keep up the good tradition of this blog, famous for its quality content, I had to meditate for months.

Work, as usual, has not been forgiving of my spare time. Many things also happen outside the event horizon of a computer screen, and it becomes increasingly hard to clear up my thoughts and write things down.

Blabbering aside, I've finally managed to write a working prototype of the famous Entity System. Not only that, though. The Rainweaver Framework (guess I'll stick to this name for the time being) has been given lots of love as well, and despite my constant refactoring, I can say I'm 80% satisfied with the result. I made a post at Gamedev.net but it stirred no interest. Ah well. For fun and not for fame, right?

This Framework of mine is a hobby project. I'm also giving myself the chance to learn new technologies such as WPF. It has a steep learning curve, it's true. If you want to learn WPF, you've got to have the time for it. Time to sit down, open two Visual Studio instances, download Family.Show and get dissecting. Time to wade through thousands of tutorials, looking for best practices with a dim light.

With this spirit, I decided to take up a WPF project related to the Entity System. It's called Prototyper. You can still see in the changelogs that the project was named ArchProto, then ProtoArch; it moved from WinForms to WPF (just like a growing adult), and got its final name.

Project conception aside, I'll explain briefly what it's supposed to do.

Every Entity System has a Schema and a Runtime. The first says how things work, the second makes them happen. Between them, there are Runtime Globals and Scripts. Runtime Globals are implementation-specific methods or properties exposed by the runtime to the schema. Scripts can be modified at runtime (atomically, i.e. no undefined state across modifications) and are generally kept in some kind of storage (files, db).
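To make the split a bit more concrete, here's a minimal C# sketch of how I picture the boundary; all the type names (RuntimeGlobals, IScriptStorage and so on) are made up for illustration and don't match the actual Rainweaver code.

using System;
using System.Collections.Generic;

// Hypothetical sketch of the Schema/Runtime split - names are illustrative only.
public delegate void MessageHandler(RuntimeGlobals globals, object payload);

public class RuntimeGlobals
{
    // Implementation-specific methods/properties the runtime exposes to scripts.
    public Random Rng = new Random();
    public void Log(string text) { Console.WriteLine(text); }
}

public interface IScriptStorage
{
    // Scripts live in some kind of storage (files, db) and can be swapped at runtime.
    MessageHandler GetHandler(string message);
}

public class ComponentDefinition        // Schema: says how things work
{
    public string Name;
    public List<string> HandledMessages = new List<string>();
}

public class EntityRuntime              // Runtime: makes them happen
{
    private readonly RuntimeGlobals _globals;
    private readonly IScriptStorage _scripts;

    public EntityRuntime(RuntimeGlobals globals, IScriptStorage scripts)
    {
        _globals = globals;
        _scripts = scripts;
    }

    public void Dispatch(string message, object payload)
    {
        // Grab the current handler once, so a script swapped mid-dispatch
        // never leaves us in an undefined state.
        MessageHandler handler = _scripts.GetHandler(message);
        if (handler != null)
            handler(_globals, payload);
    }
}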

Prototyper is being written to allow designers to define components, messages and prototypes in a visual fashion; associate a Runtime Globals type for scripts; associate scripts from a typed script storage object to message handlers; compile a schema to CLR Objects.

The Entity System was a challenge and still is, as it's not 100% done. But a bigger challenge lies ahead with this Prototyper thing.

If you feel brave, head over to http://rainweaver.codeplex.com/, download the latest code drop, play with it, and examine the code.

Thanks for reading.

Bye-bye!

Thursday, September 17, 2009

"The Moon Won't Shine"

"...It's broken down" to use Eric Clapton's words.

Lua. Enemy of mine. I love the language, don't get me wrong. But I spent oh so many nights on my Lua VM that now I burst into tears every time I glance at the project file in Windows Explorer.

First and foremost, while I believe the Lua source code is highly optimized C, it kinda looks obfuscated. It gets quite hard to understand what's going on - so hard that I haven't managed to figure out how function calls work. And this makes me very sad and angry; people hate me, and I hate the world. There's a lot of pointer mumbo-jumbo to maximize speed.

Calls are pushed onto a "call stack". They seem to contain stack top and stack base information. However, a Lua thread has a notion of a stack top too. You'd think the current call info on the call stack has all the info you need in order to push values at the right index; but no, there are effectively two stack tops, and the macro hell makes everything even more difficult to understand.
This is more of a practical problem than an issue with the language itself.
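To give an idea of the two-tops business, here's roughly what a managed port ends up juggling; the field names are my approximation of the Lua 5.1 structures (lstate.h), not actual Rainweaver code.

// Rough managed mirror of the Lua 5.1 structures - indices into the value
// stack instead of pointers; names approximated, not a faithful port.
public struct CallInfo
{
    public int FuncIndex;   // slot holding the function object being called
    public int Base;        // first slot of the callee's registers
    public int Top;         // ceiling reserved for this call (ci->top)
}

public class LuaThread
{
    public object[] Stack = new object[256];
    public int Top;                               // first free slot (L->top): the *other* stack top
    public CallInfo[] CallStack = new CallInfo[32];
    public int CallDepth;

    public void Push(object value)
    {
        // Arguments bump the thread-level top, while the callee addresses its
        // registers relative to CallInfo.Base - mixing the two up is exactly
        // where things go wrong.
        Stack[Top++] = value;
    }
}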

Something more interesting would be discussing the possibility of porting Lua to the CLI. I bet that as soon as .NET 4.0 is out, there will be a plethora of language implementations on the DLR.

Anyway. Lua is a strange beast, strange in the sense that Lua values just can't map to POCO objects. This means that you have to implement proxy objects that wrap the semantics of a Lua value. Not good. Quite bothersome. Prone to run slow - slower than the original C implementation, and especially so in a VM that decodes bytecode on the fly. I have faith in IDynamicObject of the 4.0 FCL.
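For the record, this is the kind of proxy I mean - a tagged wrapper that mimics Lua's value semantics on the CLR. A sketch, not the actual implementation.

// Sketch of a proxy for Lua values: nil/boolean/number/string/table/function
// don't map cleanly onto POCOs, so everything goes through a tagged struct.
public enum LuaType { Nil, Boolean, Number, String, Table, Function }

public struct LuaValue
{
    public LuaType Type;
    public bool Boolean;
    public double Number;        // Lua numbers are doubles
    public object Reference;     // string, table or closure, when applicable

    public static readonly LuaValue Nil = new LuaValue { Type = LuaType.Nil };

    public static LuaValue FromNumber(double d)
    {
        return new LuaValue { Type = LuaType.Number, Number = d };
    }

    // Lua truthiness: only nil and false count as false.
    public bool IsTruthy()
    {
        return Type != LuaType.Nil && (Type != LuaType.Boolean || Boolean);
    }
}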

Practical doubt: what if the Compact Framework got runtime code generation as well? All my efforts in creating a Lua VM would become useless. And right now, seeing as I can't unravel the mysteries of the Lua implementation, I'm not even enjoying the effort of making one. Yes, I have a better picture of how Lua works, but no, I don't have a full, clear picture.

Annoying.

As usual, if you don't know what you're going to code in advance, things are bound to become a mess. I thought I had nailed function calls, as the output of a few test scripts would yield exactly the same results as the original Lua VM - however, things got crazy when I had to implement protected calls and coroutines. And vararg parameters. Not enough analysis, and that's the result.

For the record, it is just wrong in my opinion to use VES exceptions (oh, fancy details) to change the program flow of a Lua script (which in turn calls plain old CLI methods). The implementations I've seen do just that, seeing as the original C code uses the setjmp/longjmp facilities, or C++ exceptions if #defined to do so; and it becomes hard to model Lua errors in a different way. However, I think it is possible to avoid using exceptions (costly) with a not-so-nice set of flags that would be checked in strategic places. This might produce code that is not as readable, but it's bound to perform better. In this case, I think it might be worth sacrificing readability - yes, purists will cry and all.
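Just to show what I mean by a set of flags, here's a sketch of the idea - a status code propagated and checked instead of a thrown exception. The names are made up.

using System;

// Sketch of flag-based error handling: every VM-level call reports a status
// that callers check, instead of throwing a CLR exception to unwind pcall.
public enum LuaStatus { Ok, Yield, RuntimeError }

public class LuaVmSketch
{
    public string LastError;                 // set when a call fails

    public LuaStatus Error(string message)
    {
        LastError = message;
        return LuaStatus.RuntimeError;       // checked at each "strategic place"
    }

    public LuaStatus ProtectedCall(Func<LuaStatus> body)
    {
        // lua_pcall equivalent without try/catch: just propagate the status.
        LuaStatus status = body();
        if (status == LuaStatus.RuntimeError)
        {
            // restore VM state here (pop call infos, close upvalues, ...)
        }
        return status;
    }
}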

I've also discovered F#. I have too many things to do, and since work has the highest priority (the go-live is getting closer and closer), the chances of getting any of my personal projects done get slimmer and slimmer with time.

F# is very cool. But damn it, functional programming seems to require some kind of CS education, which I do not have.

Another challenge. Yeah.

That's about it, thanks for reading.

Nerd joke I came up with while talking with a friend some time ago:
"You don't have to kill me; just set me to null, the garbage collector will take care of the rest". I thought it was fun. :P

Wednesday, July 22, 2009

Of Virtual Machines

Hello my dear readers, I have been away too long.

I wanted to give a little visibility to my latest efforts: a Lua Virtual Machine. Yes, you got it right, a Lua Virtual Machine! Bytecode in, execution out! This also means that as soon as the bytecode format changes (and it can happen anytime, at the discretion of the creators of Lua) everything breaks.

Needless to say, it's based on the works of Fabio Mascarenhas (Lua2IL) and Kein-Hong Man (A No-Frills Introduction to Lua 5.1 VM Instructions).

I really love Lua, I think it's a wonderful language. Not perfect, but it features metamethods, and if something has "meta-" built in, it must be cool (they were once called tag methods, as you can glimpse from the source). Jokes aside, metamethods offer nice extensibility and hooks into language events (indexing, new index creation, various operators, to name a few) that turn every change into a little microcosm of new features.

Back on topic, this Lua VM is meant to run on non-DLR-ready frameworks, such as the .NET Compact Framework. You might argue that System.Reflection.Emit comes before the DLR, but the latter is the future (I've had too much IL emit at once in the past) and I like to look ahead.

I know there have been many wise guys at work on Lua implementations on the CLR, but I took up this challenge mainly for fun and not for fame. If it turns out to be complete enough for regular usage, even better. There's a lot to learn from the learning itself.

You can find the work-in-progress source at the Rainweaver Framework Codeplex page. I make a lot of changes all the time, but the core library should be stable enough to use.

I tried to use Irony to create a Lua AST generator, but it looks like I'm not smart enough, and I had to fall back to other solutions for the moment. It was cool to find out about monadic parser combinators (LukeH's weblog and Brian's weblog) - read them, because they will make you super smart and taint your imperative ego with functional sexiness. As soon as I manage to play with those concepts a bit, I'll be sure to post more.

Waiting for feedback as usual,
Yours truly,
Rob

Tuesday, May 12, 2009

Parallelism, Scalability, Persistence - Linkage

Thanks to a bad neck, I finally have some free time. Why not spam the blog, I thought (and why not try Windows 7 on a VM, zomg it's amazing).

While searching for cool programming stuff, I found some interesting links I have to share with you. Just two for the moment! Don't be greedy.
The Smoke framework, for starters. The good thing is that what I had in mind is scarily similar to what Intel has done, and this makes me a happy gummybear.
Retlang, by Mike Rettig, which seems to be the answer to my concerns about thread-safe message passing.
(And "Stackless C#", by Tim Mcfarlane, which has led me to try and learn how Irony works in order to recreate a Lua CLR compiler that supports coroutines without fugly hacks - but that had better be another story for another post.)

While those two links lead to resources that can only inspire us mere mortals, they also made me remove my Google Code page until I completely embrace their paradigms. That's to say, I wasn't exactly happy with what I'd done and I wanted a fresh start. I think by now you've noticed my big problem with finishing my own projects.

Now, you might be wondering, what's this got to do with the post subject? Very well. Good question.

Smoke shows an important concept: sharing the workload among n threads. However, as usual, it's easier said than done. Sharing the workload requires careful design, so that nothing is left to compromise and everything falls nicely into place when all computations are done.

However, one of the biggest challenges is to actually find good heuristics so that work is spread evenly across threads. You don't want to waste time counting threaded sheep jumping over the fence. The other is to allow an immediate flow from producers to consumers.

I have a good example of the mental maze you find yourself in once you try to make sense of the above: your rendering thread is told to render something you haven't yet loaded. An animated mesh, for instance, along with its textures and animations and whatnot.
If the rendering thread waited idly for that asset to be loaded, we'd be back to square one. It becomes difficult to remove dependencies from systems that'd naturally rely on them. What would the pragmatic developer do? Perhaps identify each resource with an ID, and check against that to verify whether the asset has been loaded. However, spamming IDs all over the place lacks elegance, and the initial problem leads to another sub-problem: how to organize data.
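Something like this is what I have in mind for the ID-based check - the renderer never blocks, it just skips or substitutes whatever isn't loaded yet (all names invented for the example):

using System.Collections.Generic;

// Sketch of the pragmatic route: assets are addressed by ID and the renderer
// only draws what is actually in memory.
public struct AssetId
{
    public readonly int Value;
    public AssetId(int value) { Value = value; }
}

public class AssetCache
{
    private readonly Dictionary<int, object> _loaded = new Dictionary<int, object>();
    private readonly object _sync = new object();

    // Loader thread publishes an asset once its bytes are in memory.
    public void Publish(AssetId id, object asset)
    {
        lock (_sync) { _loaded[id.Value] = asset; }
    }

    // Render thread: never wait, just report whether the asset is available.
    public bool TryGet(AssetId id, out object asset)
    {
        lock (_sync) { return _loaded.TryGetValue(id.Value, out asset); }
    }
}

// Usage on the rendering side:
//   object mesh;
//   if (cache.TryGet(meshId, out mesh)) Draw(mesh); else DrawPlaceholder();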

There are two kinds of data I can see right now: in-memory data, and assets to be eventually loaded into some memory (any type of asset, initially stored on disk - from textures to AI-behaviour-definition files, the latter being just made up).

Both have a stage in which they are not usable, as they're yet to be either created or loaded. And even when you've loaded data, there's another possible step: going back to storage. Persistence.

All of this needs a common interface: find necessary data, load it, access it, save it again. Rinse and repeat. Add two tablespoons of sugar. Did I mention data has to be immutable? My head is hurting.
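The common interface, boiled down to its bare bones, could look like this (a sketch; the generic store and its members are invented for illustration, and TAsset is assumed immutable once loaded):

// Find necessary data, load it, access it, save it again - rinse and repeat.
public interface IAssetStore<TKey, TAsset>
{
    bool Exists(TKey key);              // find necessary data
    TAsset Load(TKey key);              // bring it into memory
    void Save(TKey key, TAsset asset);  // back to storage: persistence
}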

Oh, damn. Lunch time. I was about to make a nice diagram. I'll continue later. >:)


Edit: nice diagram:

A Rain Song: now with nice pictures!


Why two servers with the same processing units? Our ultimate goal is to plug in one more server and see performance double instantly. More or less. The same goes for three servers, and so on.

More on this later - the car's getting its brake pads changed and I need to take it back.

Sunday, April 19, 2009

Parallelism, Scalability, Persistence

In order not to let the blog wither, I think I'll post some notes I took during my stay in Milan. I won't bother you with the details of these past months, as I've been beyond busy... perhaps super busy. I've been working on a data relay system for an international company (first big project, yay!), and I don't feel like enumerating the crapload of snags I hit along the way. I've been using WCF, for the curious. Nice technology, if confusing at first, like most MS frameworks.
Anyway, there we go. I'd love to see some feedback, if any.

--

Parallelism means performing many operations with none waiting on another.

Scalability means opening up parallelism over separated but cooperating processing units.

Persistence means the capability of a state to be saved and restored in its entirety at any point in time.


In order to achieve parallelism, data must be immutable, there must be as little data contention as possible, and no locking, as a corollary of the previous statement. In order to achieve scalability, operations must be serializable, and so must their results, so that they can be shared across different parallel processing units. The cost of sending data across a communication channel must be less than the sum of the processing costs on a processing unit; this implies fast communications and smart work-stealing heuristics.
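As a tiny illustration of what "immutable and serializable" means in practice, here's the shape I have in mind for an operation; MoveOperation is just an invented example, not part of any actual code.

using System;

// Immutable (fields assigned once in the constructor) and serializable, so the
// operation can be shipped to another processing unit when that is cheaper
// than executing it locally.
[Serializable]
public sealed class MoveOperation
{
    public readonly Guid EntityId;
    public readonly float DeltaX;
    public readonly float DeltaY;

    public MoveOperation(Guid entityId, float deltaX, float deltaY)
    {
        EntityId = entityId;
        DeltaX = deltaX;
        DeltaY = deltaY;
    }
}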

In order to achieve persistence, the selected storage must be able to create a perfect copy of the data being sent to it, save it in a lossless way and retrieve it later; this sequence must be able to happen at any point in time to prevent sudden data loss.

--

The data upon which operations are performed and the operations themselves must be local, that is, they must live in the same processing unit. Whenever a state* happens to be partially shared across two different processing units, the one with less workload will receive a full copy of the necessary data and carry out the involved operations. This can be seen as an implementation of the concept of "ghosting".

* A state is a collection of operations to be performed towards a specific result, along with their determinant data.
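In code terms, the hand-off could be as simple as this sketch (ProcessingUnit and WorkState are invented names, purely for illustration):

// "Ghosting" sketch: when a state straddles two processing units, the one
// with the lighter workload receives a full copy and runs the operations.
public class WorkState
{
    public WorkState Clone() { return (WorkState)MemberwiseClone(); }
}

public class ProcessingUnit
{
    public int PendingOperations;

    public void Accept(WorkState ghost)
    {
        // enqueue the cloned state for local execution
        PendingOperations++;
    }
}

public static class Ghosting
{
    public static void ResolveSharedState(WorkState shared, ProcessingUnit a, ProcessingUnit b)
    {
        ProcessingUnit lighter = a.PendingOperations <= b.PendingOperations ? a : b;
        lighter.Accept(shared.Clone());   // full copy of the necessary data
    }
}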


Data can also be proactively shared with processing units that don't yet need it. Sharing in this case means transferring a clone of the data until the data is either not needed anymore or completely entrusted to another processing unit. Proactive sharing might happen when data gets close to the processing unit boundaries, which can be either abstract or physical (different machines linked by a connection). Data should be sent only when it changes.

I hope I'll be able to post more.


Wednesday, March 4, 2009

Oooh.

My best virtual Croatian friend Joe Basic and I have opened a Google Code page.

http://code.google.com/p/rainweaver/

The Rainweaver Framework (it's a codename, whatever) is a set of libraries for game developers. You can read about our evil plans on the first page. There's something you can download, as well. Let us know what you think, of course!

If you know C# and you're the reliable type, consider joining us. We're busy with work, university, and all the rest, so it's a long-term project.

Thanks for reading.


Monday, January 19, 2009

Work, work, work

I know you've been missing me. I've been missing you too - blog of mine.

Anyway, it's been a super busy period with super big things to get done. For instance, I wrote a Dynamics NAV text fob parser in order to create versioning documentation against two databases. It's been both a pain and a pleasure. I had a few "eureka!" moments, worth all the stress I went through to deliver the tool in a timely manner. I also managed to understand parsing / compiler theory a bit more. You never stop learning.

I also took the opportunity to brush up on my design patterns knowledge. I've been messing with MVC, and today I've been reading up on MVP - the former being model-view-controller and the latter model-view-presenter. Curious? I'll post a pretty scheme I made tomorrow. It's pretty, I swear. Edit: and here it is:


I learned some new tricks in these months, and I got to the point where you have to actually sit down and try to make everything snap into place.

  • MVP? World editor. Absolutely.
  • Object persistence? Game server. Finally I realized the true form of the Entity System (more on this later).
  • System.Threading? Do more with less. Locality of data - as few shared states as possible, lockless when you can, otherwise don't bother.
  • System.AddIn? Useless - use MEF and do yourself a favour; however, you learned that proxies and abstractions are a good way to decouple the contract from the implementation, and if you're smart (like the System.AddIn guys) you get a version-tolerant framework without even breaking a sweat. And here comes a platform-independent engine API...! (See the sketch after this list.)
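For what it's worth, here's the kind of contract/implementation split I mean, expressed with MEF; IRenderer, NullRenderer and Engine are made-up names for the example, not the actual engine API.

using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;
using System.Reflection;

// The consumer only ever sees the contract...
public interface IRenderer
{
    void DrawFrame();
}

// ...while the implementation is discovered at runtime from whatever assembly exports it.
[Export(typeof(IRenderer))]
public class NullRenderer : IRenderer
{
    public void DrawFrame() { /* platform-specific code would live here */ }
}

public class Engine
{
    [Import]
    public IRenderer Renderer { get; set; }

    public void Compose()
    {
        var catalog = new AssemblyCatalog(Assembly.GetExecutingAssembly());
        var container = new CompositionContainer(catalog);
        container.ComposeParts(this);   // satisfies the [Import] above
    }
}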

Thanks for reading.