Swimming the Matrix


Why you never put all your eggs in one basket

I'm about as big a proponent of the cloud as you can get. I think almost every company could find a way to use cloud services to decrease their costs. But, naturally, most people don't use the cloud ideally, as shown by the recent outage of Amazon's AWS service, which took down large sites like Reddit and Netflix, both of which apparently run almost entirely in the cloud.

If you've played with the cloud, you may be wondering how that's possible. After all, the point of the cloud is to have a decentralized system in place so that it can't go down. If a server goes down, another one is instantly ready to replace it. Amazon has different zones so you can create server instances closer to your users and hopefully get better performance. And if, as has happened recently, a data center that represents one of those zones goes down? Well, just spin up new instances of the lost servers in another zone and continue on your merry way!

By the looks of this article, however, that hasn’t happened. Companies, it seems, put a little too much faith in the reliability of Amazon and other cloud providers. Part of that is Amazon’s fault for playing up the fact that they are the Titanic, the unsinkable ship. But the other part of the blame is squarely on companies that don’t plan for failure.

“The best-laid plans of mice and men often go awry” comes to mind right now. Failures do happen. Software becomes corrupted, hardware fails, lightning strikes take out a data center, and chaos will ensue if you aren’t ready for it.

Let's look at the recent Amazon EC2 zone outage as an example. If I were running a large web site, I'd likely have mirror images of my server structure in several regions, from Europe to America, so that everyone got a good connection to my site. This would also help when a zone went down, because either I, or a clever script watching on my behalf, could migrate that load to other zones. For instance, if, as in the current example, US-East goes down, I'd migrate traffic to US-West until the problem is resolved. Sure, the site might be a little slower than usual for people on the East Coast, but it's still up.
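To make the "clever script" idea concrete, here's a rough watchdog sketch. The health-check URLs and the switchTrafficTo() function are hypothetical stand-ins for whatever your infrastructure actually exposes; in practice the switch would probably be a DNS or load-balancer change.

[sourcecode language="typescript"]
// Check each region's (hypothetical) health endpoint and route traffic to the
// first healthy one. This is a sketch, not a production failover system.
const regions = [
  { name: "us-east", healthCheckUrl: "https://us-east.example.com/health" },
  { name: "us-west", healthCheckUrl: "https://us-west.example.com/health" },
];

async function isHealthy(url: string): Promise<boolean> {
  try {
    const response = await fetch(url, { signal: AbortSignal.timeout(5000) });
    return response.ok;
  } catch {
    return false; // timeouts and network errors count as unhealthy
  }
}

function switchTrafficTo(regionName: string): void {
  // Hypothetical: update DNS records or load balancer weights here.
  console.log(`Routing traffic to ${regionName}`);
}

async function watchdog(): Promise<void> {
  for (const region of regions) {
    if (await isHealthy(region.healthCheckUrl)) {
      switchTrafficTo(region.name); // first healthy region wins
      return;
    }
  }
  console.error("No healthy region found; time to page someone.");
}

setInterval(watchdog, 60_000); // check once a minute
[/sourcecode]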

Cloud technology not only lets you scale your site up and down as demand spikes occur, but you also can expand geographically. Every major cloud provider has multiple data centers. Take advantage of them, even if only so they are there in an emergency. And if worst comes to worst, have a backup cloud provider ready to step in if someone at Amazon messes up an upgrade.

Published by meatshield, on August 9th, 2011 at 10:30 am. Filed under: Uncategorized

An Introduction to Using HTML5 for Mobile Development

I'll just come out and say it: I'm extremely lazy (read: efficient) when it comes to programming. I want to do things the quickest, easiest way and be done with it. I've had a few opportunities come up to work on mobile applications. I love the idea of mobile applications, but hate the idea of coding 3 or 4 instances of what is basically the same application in C, Java, C#, etc. Luckily, HTML5 makes mobile development a lot easier than juggling languages, platforms, and APIs.

HTML5, in the broadest sense, was created to fill in some of the gaps in the HTML standard. For instance, HTML4 does not support video. Or audio. Or animations. Or geolocation. Or local storage beyond cookies. So you can see there are a bunch of limitations that HTML5 gets us around.

To use HTML5, you don't have to get your users to upgrade anything. Unless they are still on Internet Explorer 8 or lower, they are likely already capable of viewing HTML5 sites. Then it's just a matter of adding some new tags and attributes and you are off and running.

For mobile development, this gets really useful because all modern smartphone browsers are HTML5 compatible, at least in the most common areas such as video. So if we wanted an application that, say, finds the closest pizza place, we could use the Geolocation feature in HTML5 to grab the user's location and search from there.
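To make that concrete, here's a minimal sketch of the location half of that pizza finder using the standard Geolocation API; findNearestPizzeria() is a hypothetical stand-in for whatever places lookup you'd actually use.

[sourcecode language="typescript"]
// Ask the browser for the user's position, then hand it to a hypothetical
// lookup function. Only navigator.geolocation is a real API here.
function findNearestPizzeria(latitude: number, longitude: number): void {
  console.log(`Searching for pizza near ${latitude}, ${longitude}`); // your lookup goes here
}

function locatePizza(): void {
  if (!navigator.geolocation) {
    console.log("This browser doesn't support geolocation.");
    return;
  }
  navigator.geolocation.getCurrentPosition(
    (position) => {
      const { latitude, longitude } = position.coords;
      findNearestPizzeria(latitude, longitude);
    },
    (error) => console.log(`Couldn't get a location: ${error.message}`),
    { timeout: 10000 } // give up after ten seconds
  );
}
[/sourcecode]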

Of course, there are some limitations to this approach. Despite getting access to some hardware, such as the GPS, we’re still unable to use other hardware features, such as the accelerometer or the camera. That’s where something like PhoneGap comes in.

In a nutshell, PhoneGap provides a Javascript interface to a device-specific native module that runs in the background, allowing you to write an app using HTML5, CSS3, and Javascript while still accessing the camera and other hardware features. This further reduces the need to write a fully native app.
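Here's a sketch of what grabbing a photo through that bridge looks like. The navigator.camera.getPicture(success, fail, options) call is PhoneGap's camera API as I remember it, the minimal type declaration just keeps the snippet self-contained, and showPhoto() is a hypothetical function of your own.

[sourcecode language="typescript"]
// Take a photo through PhoneGap's Javascript bridge and hand the result to the app.
interface PhoneGapCamera {
  getPicture(
    onSuccess: (imageData: string) => void,
    onFail: (message: string) => void,
    options?: { quality?: number }
  ): void;
}

// PhoneGap injects the camera object onto navigator at runtime.
const camera = (navigator as unknown as { camera: PhoneGapCamera }).camera;

function showPhoto(base64Image: string): void {
  console.log(`Got ${base64Image.length} characters of image data`); // hypothetical display logic
}

function takePhoto(): void {
  camera.getPicture(
    (imageData) => showPhoto(imageData),                  // base64-encoded image
    (message) => console.log(`Camera failed: ${message}`),
    { quality: 50 }                                       // keep the payload small
  );
}
[/sourcecode]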

There are some caveats to this approach though. While it will vastly speed up your development time, don't get caught thinking you'll be able to write once and instantly deploy to every platform PhoneGap supports. For one, not all features are supported on all phone operating systems, whether due to lack of development effort or to some security restriction in the device's OS. Android and iOS, being the most popular, naturally support everything, so if those are your two target platforms, carry on. The second rub is that each device has quirks that have to be accounted for, such as needing certain options set a certain way for a feature to work. Luckily PhoneGap lets you know what device you are running on so you can plan accordingly, usually with just an if statement or two, like the check sketched below.
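For example, a platform check might look something like this. The device.platform global comes from PhoneGap's device API as I recall it, and the back-button example is an illustrative assumption rather than a documented quirk.

[sourcecode language="typescript"]
// Branch on the platform PhoneGap reports and pick per-device behavior.
declare const device: { platform: string }; // provided by PhoneGap at runtime

function backButtonBehavior(): "hardware" | "on-screen" {
  // Android phones ship with a hardware back button; iPhones need one drawn on screen.
  return device.platform === "Android" ? "hardware" : "on-screen";
}
[/sourcecode]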

The last issue, and one you might have difficulty getting around if you're just using PhoneGap and HTML5, is that everyone will know you've made a web app rather than a native app. This is because PhoneGap renders your content in a "Web View" (it has a different name on every platform, but it's essentially an embedded browser window) and displays it like a website. Thus any links you have will get a lovely border placed around them when clicked. I refer you to Bank of America's original mobile apps, which were all a website, and all very poorly done (which is why they now have native apps). However, through the power of the mobile ninjas, you too can fool your users into thinking your app is a native(-ish) app!

The secret lies in cheating. Rather than going to all the trouble of figuring out which CSS styles need to be set to make that go away, you can just use something like jQuery Mobile. Yes, that Javascript framework even Microsoft has to love has a mobile version. On top of the usual jQuery you're used to, you also get a pretty rich UI system that gives your site a nice look and feel with almost no effort. I threw together an example for a talk I'm giving in a few days in about a day and a half of work, most of which was spent figuring out why something wasn't working the way it was supposed to (hint: I messed it up!).

Overall, if you're knowledgeable about front-end web development and don't need to draw fancy graphics for games or use some unsupported hardware, an HTML5 application can be a nice way to get to market quickly, keep a consistent look and feel across all platforms, and make your updates easier and faster.

If you're interested, I gave a talk on this subject the other night, and you can watch it in IE if you so choose (it was recorded using LiveMeeting, and Microsoft wants you using IE, not me!). The slides didn't make it into the video, but the parts where I shared my desktop did, so you can download my slides if you want to follow along.

Published by meatshield, on August 5th, 2011 at 8:19 pm. Filed under: Uncategorized

The Big Jumbo Post of SQL Exercises

So I just had the most intense day of learning in my life. In 8 very short hours, I went from a relative SQL novice to a real pro, able to write joins as naturally as I walk, along with subqueries and all that good stuff. We literally spent all day doing nothing but writing SQL queries against sample data sets using Oracle's 10g database and their SQL Developer tool. I enjoyed the day so much that I've written up the exercises, which should be a great resource for anyone looking to get up to speed quickly on common SQL queries and tasks, such as joins, equijoins, and so on. Read more…

Published by meatshield, on June 18th, 2011 at 1:36 am. Filed under: Uncategorized

An Introduction to Virtualization

So I'm in Dayton for two weeks for company training, and meeting people from other offices. Jim, from the Seattle office, asked me if I would write a post about virtualization, a topic he isn't familiar with. I've been using virtualization tools for a few years now, so here's a brief introduction to the topic (in exchange for him teaching me about Windows Phone 7, since I know NOTHING about that!).

Virtualization, as it pertains to computing, is the act of running an operating system inside another operating system. The name comes from the fact that you simulate, or "virtualize," the underlying hardware in software. This gives us a few very nice advantages, such as being able to easily save the state of a virtual machine, move it to another computer, and even run multiple "computers" on a single physical machine!

So why would we want to run multiple computers on a single physical computer? One use, as I'm finding out today, is when you need a computer for a short amount of time. For instance, I'm working toward my Microsoft SharePoint 2010 certification, and I need a server to practice on. Rather than spend several hundred dollars on another computer that's going to take up space and power, then go unused after I get the certification, I can quickly create a virtual server, play with it, then discard it afterwards without spending a dime (thanks, MSDN license!).

On the business side, one of the buzzwords out there today is "consolidation." In this sense, consolidation refers to moving your physical servers into virtual machines and running several of them on a single physical computer. This reduces power costs as well as space and cooling costs, so there's a lot of money to be saved in switching to a virtualized environment. The reason this is possible is that very few resources, whether they are load balancers, storage servers, or web hosts, are ever fully utilized. Many times those resources run at a much lower utilization rate, say 20%, which means you're wasting 80% of that server's capacity. So it makes sense to take four or five of these underutilized resources and have them share one physical computer so that they use close to 100% of its resources. A back-of-the-envelope version of that math is sketched below.
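This is a toy calculation rather than a sizing tool, and the 20% utilization and 90% target figures are just illustrative assumptions.

[sourcecode language="typescript"]
// How many ~20%-utilized servers fit on one host if we're willing to run it at ~90%?
function serversPerHost(avgUtilization: number, targetUtilization = 0.9): number {
  return Math.floor(targetUtilization / avgUtilization); // 0.9 / 0.2 = 4.5, so 4 per host
}

const physicalHostsNeeded = Math.ceil(10 / serversPerHost(0.2));
console.log(`10 underutilized servers consolidate onto ${physicalHostsNeeded} physical hosts`);
[/sourcecode]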

There are two more benefits your business can get from virtualization: near-zero downtime and resource re-allocation. In a traditional physical set-up, if your server crashes, be it from a bad motherboard or a bad hard drive, you are out of luck until you either get a spare server set up (a process that at best is going to take an hour) or bring up the backup system (which, with ever-shrinking IT budgets, is becoming a rarity). Imagine, instead, that all you had to do was copy the files that make up the virtual server to another machine and turn it on. That turns what could be a several-hour process into several minutes, saving you time and money.

The converse of crashes is investment, which brings us to re-allocation. In traditional infrastructure, any investment in a specific area, such as firewalls or load balancers, or even in specific functions within the infrastructure, such as databases or web servers, is more or less permanent. Any alteration to the structure is a large undertaking, involving your engineers and administrators re-imaging machines and altering configuration files. With virtualization, if you need to move resources around, it's as simple as turning off the resources you don't need and replacing them with the ones you do. This takes a few minutes at most, and since you spin up "fresh" images that are already configured and ready to go, the same image works whether you need one extra resource or ten. This leads to greater flexibility in your IT utilization: instead of investing in enough computing power to hit the maximum usage of every resource, you only need enough dedicated power to meet the minimum needs of the infrastructure, with the rest living in a virtual, mobile environment that can be brought up to meet peak demand.

Now, as with anything, it's not all sunshine and roses with virtualization. For one thing, servers running in a virtual machine take a performance hit simply because they are not running directly on the hardware. And if you want to go whole hog and virtualize your entire infrastructure to be as dynamic as possible, you will have to redesign how you do your IT business and build new services to scale your resources, store your images, and so on. I don't suggest for a moment that you could switch to a virtualized environment in a weekend. Most likely you will want a specialist to come in and help you through the process. Obviously, since I work for Sogeti, I'd prefer you call on Sogeti, since we have lots of experience in cloud computing and virtualization (cloud computing is just virtualization in someone else's data center).

There are a lot of resources you can check out to learn more about virtualization. Amazon has a surprisingly good lineup of cloud products that come with explanations of why you might want to invest in their solutions, and Microsoft has its own cloud if you are invested in Microsoft products; those two are probably the biggest providers. There are also cloud offerings from Rackspace and GoGrid. I'd also look up "hybrid cloud," since that's becoming a great cost-saving measure over letting Amazon take care of all of your virtualization needs.

If you want to just stand up a cloud real fast and see if it's right for you and how it all behaves, I recommend getting StackOps' cloud-in-a-CD, which lets you install everything you need to run a cloud from one CD and a web configuration form (see this post). It emulates Amazon's EC2 offering and that's about it, but that's really the core of migrating to a virtualized environment.

As per usual, if you have any questions, you can drop me a line or comment on this post!

Published by meatshield, on June 17th, 2011 at 11:57 am. Filed under: Uncategorized

Why Are We Still Using CAPTCHA?

I'm still amazed that everyone still relies on CAPTCHA to prevent automated use of their sites. CAPTCHAs, for those who don't know, are those pictures of distorted letters and numbers you see when you register with a site or after failing to log in a few times.

The problem is that the algorithms that solve these, picking the letters and numbers out of the pictures, are better at it than we are! I don't know how many times I've had to re-try after failing one of those tests because the zeroes and O's look the same! And the spambots and all that still get through despite my inconvenience!

You know what's better at fooling automation than a picture? A question! If Watson's recent Jeopardy! matches taught us anything, it's that computers still have a very difficult time understanding human language. Even as text it's typically garbage to them, unless you have a supercomputer to throw at figuring out what a sentence is trying to say.

And I'm not talking about anything complicated like "A train leaves the station traveling X MPH…". Something as simple as "What color is the sky?" (or space, or anything else someone with a brain would know the answer to) will do. You could easily come up with a database of hundreds of questions, then just match the answer a user gives against the one on file, as in the sketch below. It's a lot faster and easier to answer "apple" than to figure out that a blob of pixels is trying to get you to enter Fr42Gh. And don't even get me started on the new ones I've seen that make you solve a puzzle. So slow!
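Here's a minimal sketch of that idea, assuming a hand-built question list and case-insensitive matching against a few accepted answers per question.

[sourcecode language="typescript"]
// Pick a random question for the form, then check the submitted answer against
// the ones on file. The questions themselves are just examples.
interface ChallengeQuestion {
  prompt: string;
  answers: string[]; // accept a few spellings per question
}

const questions: ChallengeQuestion[] = [
  { prompt: "What color is the sky on a clear day?", answers: ["blue"] },
  { prompt: "What red fruit supposedly keeps the doctor away?", answers: ["apple", "an apple", "apples"] },
];

function pickQuestion(): ChallengeQuestion {
  return questions[Math.floor(Math.random() * questions.length)];
}

function checkAnswer(question: ChallengeQuestion, submitted: string): boolean {
  return question.answers.includes(submitted.trim().toLowerCase());
}

// Usage: show pickQuestion().prompt on the registration form, then call
// checkAnswer() with whatever the user typed before accepting the signup.
[/sourcecode]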

Questions are quick, easy to answer, and add only a moment to the authentication process. Sounds like a win-win to me!

Published by meatshield, on May 23rd, 2011 at 11:27 pm. Filed under: Uncategorized

My List of Free Software

As part of my role as “the computer guy” among my friends, I’m often asked what software I recommend to do this or that on their Windows machines. So I figured it’s time I listed it all out so others can benefit from my opinions.

All of this software is free. As in no credit card needed, no monthly fees, install it on as many computers as you want. I’m cheap and so are most people. Best of all this stuff tends to beat out commercial software!

Antivirus

You can't do any better than the free Avast! antivirus. It's free for personal use (companies have to pay), but this thing catches EVERYTHING. I haven't gotten a virus or any kind of malware since I started using it. It updates itself often, constantly scans your files without slowing your computer down, and is very easy to use. I typically install it and don't have to worry about it after that. A definite must-have for your computer.

Video Playback

VLC. That’s all you need to know. It plays every file type under the sun without complaint, and is very easy to use with a minimal interface. If you watch movies on your computer, you should get this!

Browser

I recommend Google Chrome for just about everyone nowadays. It's fast, secure, and with a minimal UI it doesn't clutter up your laptop's screen. It updates often and has tons of plugins to give you extra features. Firefox is another option (really, anything other than Internet Explorer is a giant step in the right direction), but I find that it runs a lot slower than Chrome and after a while tends to consume memory like it's going out of style (which in the programming world is a big no-no!).

CD Burning

If you like to make mixtape CDs, or, like me, burn the occasional ISO file, then InfraRecorder is a wonderfully easy solution for burning CDs. It does the same job as Roxio and other software that costs money, but it's free, can even be run from a flash drive, and is quick to boot.

PDF Reader

Everyone always thinks Adobe Reader is the only PDF reader out there, especially since it's bundled with all kinds of software. The problem is that Reader is horribly slow. For a better option, try Foxit Reader, a free alternative that does not disappoint. It opens fast and gets out of your way so you can get to reading that document.

PC De-junker

Windows needs regular maintenance in order not to bog down over time (well, not to bog down as fast anyway; in my experience it'll still slow down no matter what you do). CCleaner does a great job of running various clean-up tricks, from removing temporary files that never got deleted to clearing old values out of your system registry, which frees up hard drive space and, done regularly, keeps your computer running faster. Piriform, the company that makes CCleaner, also has some other nice free software to help with your PC de-crapping needs.

If you got to the end of the list, you'll notice a few things missing. Yes, Microsoft Office is still the best word processor, spreadsheet maker, and presentation tool out there, although you can try Google Docs and LibreOffice if you want. They work well in certain use cases, but not if you are a hardcore user of any of the Office programs. Likewise, you can't beat Photoshop for image editing, though you can give GIMP a try if you're feeling brave. iTunes as well is still my music player of choice, since I can't seem to give up my iPod.

All of the things I recommended, and more, can be easily bulk-installed using Ninite, a great site where you just select which pieces of software you want, and it downloads and installs them automatically. It's great if you just re-installed Windows or got a new computer and want to set all of this up with as little hassle as possible.

Published by meatshield, on May 22nd, 2011 at 9:41 pm. Filed under: Uncategorized

Question From The Crowd: Hard Drive Space

In the last week I've had this conversation come up twice, so I thought it must be worth a blog post. The question is: why, when you buy a product that advertises N GB of disk space, do you never actually see that full amount?

The answer is that there are a few factors. The first is that the storage industry and the software makers differ on what a GB (gigabyte) is. Hard drive makers define a gigabyte as 1000 MB (megabytes), while your operating system counts it as 1024 MB. Why? Well, 1000 is easy for people to understand, where 1024 is more confusing, if more precise. If you look at the box for a hard drive, it always notes in small lettering that they define a gig to be 1000 MB. They also define a MB to be 1000 KB (kilobytes), when the binary definition is again 1024. The decimal number on the box is technically accurate, but it lets manufacturers advertise N GB while shipping fewer bytes than the binary definition your computer uses would require.

So our first problem is a marketing one, rooted in the 1000/1024 difference (you get 1024 in the binary system by doubling starting from one: 1, 2, 4, 8, 16, and so on up to 1024, which is why two 512 MB sticks of RAM make 1 GB). I've put a quick worked example of how big that gap gets below. The next issue is the file system. The file system is a structure placed on the hard drive that tells your operating system how to interpret all the little 1's and 0's and find files and folders. This mapping takes up some space, and the larger the hard drive, the more space the file system needs to keep track of all the bits and pieces. More "complex" file systems also need more space, for example if they support directories, journaling, or other nifty features.
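Here's the worked example; the 500 GB figure is just an illustrative drive size.

[sourcecode language="typescript"]
// A drive sold as "500 GB" holds 500 billion bytes, but an OS counting in
// binary units reports it as roughly 465 GB.
const advertisedBytes = 500 * 1000 ** 3;      // 500 GB, decimal, as printed on the box
const binaryGB = advertisedBytes / 1024 ** 3; // the same bytes in binary GB
console.log(binaryGB.toFixed(1));             // "465.7" -- the "missing" ~34 GB
[/sourcecode]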

Lastly, and this one depends on your setup, there's the operating system itself. If the operating system is installed on the drive, it eats up space you could use for something else, and it can be several gigs for newer operating systems. On mobile devices you usually don't have to worry about this, since they store the operating system in a separate area.

Published by meatshield, on May 22nd, 2011 at 8:35 pm. Filed under: Uncategorized

I can haz (free) cloud?

One of the projects I've wanted to try for a while is building a cloud like Amazon's. Naturally I don't have hundreds of computers lying around like they do, but it is possible to set one up on a single computer, if you know how.

Before we look at our options, we need to answer why you would want a cloud at all. There are many reasons it might be beneficial. One is that you can turn one computer into two or more. This lets you take a machine that isn't constantly in use and split its work between several virtual computers, which is handy in software development, or if you want one virtual server doing one task and another doing something else. Since you can also "save" instances, you can scale your projects up and down as needed: if you suddenly need 3 more servers to test with, you can have them at the push of a button, no additional hardware required.

Before embarking on making your own cloud, I recommend playing around with VirtualBox and seeing if you find yourself using another operating system, or even another "computer," enough to justify the time. I've been using VirtualBox for a very long time, and I'm always spinning up various Linux versions with it to play around with new things in the Linux world.

If you wanted to do it the hard way, you might look at Eucalyptus, the original free private cloud software. However, having tried it, I can say it's very difficult to set up, even with their guides. Ubuntu Enterprise Cloud is free and built on top of Eucalyptus to try to make things a little easier. Of course then you are tied to Ubuntu, which isn't a problem for me since I use Ubuntu regularly for all kinds of things. But it's still problematic because…

Ubuntu just announced they were moving to support OpenStack, an initiative from dozens of companies to build a comprehensive set of open-source cloud software. With it you can build the major pieces of Amazon's cloud, mainly an equivalent of their EC2 virtual machine farm. That's what I did and what I'll be focusing on here.

OpenStack, I will admit, is still a little raw. Some of the documentation is confusing, and the community is still small (case in point: they only just got a forum this week…). Still, there is plenty of documentation out there, and since Ubuntu has contributed their UEC software, you can take a lot of Ubuntu's documentation and apply it almost directly to OpenStack.

Now, if the idea of installing Ubuntu and then beating your head against the wall as you install packages and configure it all sounds like a bad time, one company is going to save you a lot of frustration. They are called StackOps, and they are a cloud consulting company. Their biggest achievement, as far as this post is concerned, is the StackOps Distro. It installs Ubuntu with all the packages needed to run OpenStack Nova (the EC2 clone), configured and ready to go, free of charge.

Best of all, once it's installed it tells you to go to a web address on your server. This takes you to their StackOps Smart Installer, which walks you through configuring your cloud. It explains in perfect detail everything you might need to change, but advises you to keep most of it as it is. After you finish the wizard, it sends the configuration commands to your server and you are ready to go. Then all you need to do is log in, create a user and project, and start registering custom server images to use in your cloud (that could be a whole post on its own; look it up if you're interested, since it's pretty involved).

It’s not entirely sunshine and rainbows with StackOps, but it’s close. For one thing, I had to install it like 4 times since I kept messing up the configuration. So here are some tips to save you some time:

  • Don’t install your server with LVM. StackOps/OpenStack doesn’t seem to support it, so don’t bother.
  • For the public range of IP addresses it asks you to enter, make sure it's a range your router actually handles. My router is at 192.168.0.1, so I put in 192.168.0.64/28. Yours might vary (there's a quick sketch after this list of what a /28 actually covers).
  • If you listened and didn’t install LVM, make sure you set the storage controller to sda5 or an empty hard drive. It has to be an empty partition or non-partitioned disk. The first page it takes you to lists all the partitions. Unused ones will show a -1 for the used size.
  • Put the disk back in after the install. Don't ask me why, but the installer makes you take it out after installing, then needs it again for the automatic configuration. You'll skip an error by putting it back in once the server is up.
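As promised, here's a quick sketch of what a /28 block like my example actually covers; the specific range is just the one my router happens to use.

[sourcecode language="typescript"]
// A /28 prefix leaves 32 - 28 = 4 host bits, i.e. 16 addresses.
const prefixLength = 28;
const addressCount = 2 ** (32 - prefixLength); // 16
console.log(`192.168.0.64/${prefixLength} spans ${addressCount} addresses: 192.168.0.64 through 192.168.0.79`);
[/sourcecode]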

If you want a web GUI, there is the OpenStack Dashboard project. It is VERY raw (not to mention very odd to install), so I would skip it, do the basics manually on the command line, and then use something like HybridFox to spin up instances, assign them public IPs, and so on. Be sure to get version 1.6 of HybridFox, since 1.7 doesn't support OpenStack.

Now that you (hopefully) have the cloud up and running, there are a few things to know. I'll assume you looked here and downloaded and bundled the Ubuntu image it has; you want to do that. You can also use the Eucalyptus images here, but keep in mind you will have to launch those with a keypair (see here) or else you won't be able to ssh into them.

Hopefully that's enough for you to spin up your own cloud, too. Let me know in the comments if you run into problems; I've probably hit them myself, since it took me a week to figure out how to get it all up and running!

 

Published by meatshield, on May 17th, 2011 at 6:20 pm. Filed under: Uncategorized

An Interesting Problem: The AI Director

For those who have never played Left 4 Dead and its sequel, one of the most interesting parts of the game is that every playthrough is different. There are different numbers of zombies in different locations, keeping the game fresh. This is handled by something Valve Software, the makers of Left 4 Dead, call the AI Director. In this post I'll go over how you could make a rudimentary AI Director of your own.

I’m going to put the caveat on this that I’m not a game designer or a specialist in AI. I’m just a programmer who thought I had a basic idea of how you could do this and wanted to share :)

While Valve won't say exactly how it works, they do mention that it scales the difficulty based on how well the team is doing, taking into account things like what items the players are carrying, their current health, and various other factors specific to that game. So to make an AI Director for your game, you first need all of those stats tracked. That's step one.

You also need a facility for dynamically spawning enemies. In other words, if your level is defined statically, say in an XML file, it becomes much more of a pain to spawn enemies in a "realistic" way, and you run the risk of ramping up the difficulty too fast. So you have to design the game for dynamic spawning from the start.

Another thing I'd do differently from Left 4 Dead: I don't think you should put a cap on how hard the game can get. I'm going to take Magicka (the game that gave me the idea) as an example. In Magicka, you don't have ammo or limits on abilities, so to make the game more difficult, you have to spawn more and stronger enemies. And since you can use the most powerful spell in the game from the get-go, that difficulty needs to increase rapidly if the players are good enough to tear through the game. Sadly Magicka doesn't do this, but if it did, here's how it might work.

Here's some basic pseudocode to show how we might go about spawning enemies purely at random. We'll improve on this later, but it gives us the basics of what we're trying to do:

[sourcecode language="text"]
// We assume we have some object representing all monsters as well as all the info on the players' current state
get player states
if players are doing well
    increase difficulty level
else if players are doing poorly
    decrease difficulty level
// else if they are doing like we want them to, leave it the same
randomly generate a mob
find a position off screen nearby
foreach monster in the mob
    select a spot near your position that's a valid spawn location (i.e. not in a wall, etc.)
    if it's a valid position and nothing is currently in that spot
        spawn the monster
    else select a new spot and keep trying until you spawn the monster
end foreach
[/sourcecode]

First you need to put some constraints on your system. For instance, you can't go from spawning low-level creeps to suddenly spawning the unkillable boss; there has to be some progression. I'm a fan of normalizing data in these situations, so let's assign every mob a point value and make that the "difficulty" of the mob. Then we'll allow an arbitrary amount of fluctuation around our target difficulty, so there's some wiggle room in what we spawn.

Next, depending on the game type, we may need to change how we select our spawn position. In games like Magicka and Left 4 Dead, the players are playing through a story and defeating waves of monsters on the way to victory. As such, the constraint is that everything must spawn ahead of them. This is also efficient: spawning things behind them means the mob has to play catch-up, so we waste resources on enemies the players might never actually see.

So let’s add this in to our pseudocode:

[sourcecode language="text"]
// We assume we have some object representing all monsters as well as all the info on the players' current state
get player states
if players are doing well
    increase difficulty level
else if players are doing poorly
    decrease difficulty level
// else if they are doing like we want them to, leave it the same
randomly generate a mob
calculate the mob's difficulty rating
if the difficulty of the mob is beyond the difficulty wiggle room
    make a new mob until you get one within the range we want to spawn
find a position off screen in whatever parameter zone we set (in front, to the sides, etc.)
foreach monster in the mob
    select a spot near your position that's a valid spawn location (i.e. not in a wall, etc.)
    if it's a valid position and nothing is currently in that spot
        spawn the monster
    else select a new spot and keep trying until you spawn the monster
    {optional} start the monster moving towards the players' current position
end foreach
[/sourcecode]

There's one bit of extra stuff I added, which is to start moving the mob towards the players. This is good practice so that the resources you spent generating the mob actually get used; it's no good doing all this work to spawn a random mob the players never find.

With our basic design flow done, let's start looking at some specifics. We'll start with calculating how the players are doing. Again, we need a basic normalizing routine. Unfortunately this depends on the game, but let's take Magicka as an example. The game is unusual in that the players can always heal and always attack, so looking at their health and ammo count alone doesn't really help us. So let's look at a few other things unique to the game. One is that death is very common, so we can use the number of dead players as an indication of how well the team is doing, along with how many have low health from being recently resurrected. If they are all alive they are probably alright, but a check of their health might show that two were recently resurrected or are about to die, which suggests the team is having some difficulty. Another factor is the number of enemies nearby. One way to normalize this is to take every monster and calculate its distance from the players. Closer monsters are a bigger threat, so if there are a lot of monsters close to the players, you would hope the game currently feels more difficult. We also want to count off-screen monsters as an indication of how the game may go in the near future, as those monsters come into play. Naturally you would come up with a more advanced formula than this, but for the sake of illustration, let's assume the only monster stat we care about is health:

[sourcecode language="text" light="true"]
current difficulty = (sum of players' health [0 if they are dead] / some player average constant)
                     * ( (monster 1 health * distance from nearest player)
                       + (monster 2 health * distance from nearest player)
                       + ...
                       + (monster N health * distance from nearest player) )
[/sourcecode]

That's probably not the best formula, but it gives us an idea of how the players are doing. The player average constant basically scales the rest of the difficulty up or down. So if all 4 players have 100 health and the constant is 200, the players contribute a factor of 2 to the difficulty, meaning things are going well for them. If all are dead except one player with 1 health, the number becomes the monster sum divided by 200, so it's small. Thus we can set thresholds for what counts as "doing well" and "doing badly": say, if the current difficulty score is greater than 1000 the players are doing well, and if it's less than 70 they are doing poorly, and we adjust the difficulty accordingly. This also means we can change the difficulty more aggressively based on how far past a threshold they are, so if the players are significantly above the "doing well" threshold, we increase the difficulty more than if they are just barely above it. If you want to go that route, you probably also want to track their progress over time and, if they are consistently doing well, raise the difficulty rate considerably.
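To make that a bit more concrete, here's a rough version of the scoring in code. The 200 constant and the 1000/70 thresholds come from the paragraph above; the Player and Monster shapes and the step sizes in adjustDifficulty() are my own illustrative assumptions.

[sourcecode language="typescript"]
// Score how the players are doing using the formula above, then nudge the
// difficulty level based on the thresholds.
interface Player { health: number; x: number; y: number; }  // health 0 means dead
interface Monster { health: number; x: number; y: number; }

const PLAYER_AVERAGE_CONSTANT = 200;

function distanceToNearestPlayer(monster: Monster, players: Player[]): number {
  return Math.min(...players.map((p) => Math.hypot(p.x - monster.x, p.y - monster.y)));
}

function currentDifficultyScore(players: Player[], monsters: Monster[]): number {
  const playerFactor =
    players.reduce((sum, p) => sum + p.health, 0) / PLAYER_AVERAGE_CONSTANT;
  // As written in the formula, each monster contributes health * distance to the nearest player.
  const monsterPressure = monsters.reduce(
    (sum, m) => sum + m.health * distanceToNearestPlayer(m, players),
    0
  );
  return playerFactor * monsterPressure;
}

function adjustDifficulty(score: number, difficulty: number): number {
  if (score > 1000) return difficulty + 1;             // players are cruising
  if (score < 70) return Math.max(1, difficulty - 1);  // players are struggling
  return difficulty;                                    // right where we want them
}
[/sourcecode]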

Next is mob generation. If we have a difficulty level we are aiming for, how do we know whether the mobs we generate match it? Let's say the current difficulty is set to 100. We want to create a random mob with a value of 100, or at least close to it. Computer science students will recognize this as a nice application of the knapsack problem, where you try to fit as much stuff of the highest value as you can into a fixed-size "box." The one difference is that we don't necessarily want to maximize the "value," since we've normalized our monsters: if we did it right, spawning 100 monsters of value 1 should feel about as difficult as one monster of value 100.

So here's a nice pseudocode way to create a random mob, assuming you have an array representing all your monster types:

[sourcecode language="text"]
int difficulty = whatever our difficulty is
while(true) // we break out below
    randomly select a monster to try to insert
    if difficulty - monster's difficulty value > 0 // we can insert
        add the monster to an array of monsters to spawn
        difficulty = difficulty - monster's difficulty
    endif
    if difficulty < our wiggle room value // we can spawn this
        break
end while
go on to spawning our monsters
[/sourcecode]
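If pseudocode isn't your thing, here's roughly the same loop in real code; the monster catalog and the wiggle-room value are illustrative assumptions.

[sourcecode language="typescript"]
// Randomly pick monsters until the remaining difficulty budget drops inside the wiggle room.
interface MonsterType { name: string; difficulty: number; }

const catalog: MonsterType[] = [
  { name: "creep", difficulty: 1 },
  { name: "brute", difficulty: 10 },
  { name: "elite", difficulty: 25 },
];

const WIGGLE_ROOM = 5;

function generateMob(targetDifficulty: number): MonsterType[] {
  const mob: MonsterType[] = [];
  let remaining = targetDifficulty;
  while (remaining >= WIGGLE_ROOM) {
    const candidate = catalog[Math.floor(Math.random() * catalog.length)];
    if (remaining - candidate.difficulty > 0) { // only add it if it fits the budget
      mob.push(candidate);
      remaining -= candidate.difficulty;
    }
  }
  return mob;
}

// Usage: a mob worth roughly 100 points, e.g. a mix of creeps, brutes, and elites.
console.log(generateMob(100).map((m) => m.name).join(", "));
[/sourcecode]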

We have one final bit to discuss, and that's when to generate a mob. Your first instinct might be to spawn them either totally at random or at set intervals, but one thing you want to avoid is a predictable flow of monsters: kill a mob, move ten feet forward, kill another mob, move ten feet forward, and so on. Nor do you want to spawn completely at random, since you could drop two difficult mobs on top of each other, which would ramp the difficulty up fast enough to overwhelm the players. So we need to add another component to difficulty: how often to spawn. What we'll do is put a delay on spawning, where the spawning mechanism holds off for a semi-random period once the game hits a certain difficulty. We also probably want to put a limit on how close two mobs can be to each other, to prevent spawning one on top of another. Exactly how is up to you, but it's a nice way to keep the game unpredictable.

And that's really all the theory behind how you might go about creating random enemies. One implementation note, which I'm not going to flesh out since it's too specific to your game: put all your constants and static data, such as monster attributes, in a text file, as sketched below. This makes it really easy to balance the game just by tweaking those values. If the game gets too hard too fast, change one number and you can bring it back into line. I haven't really deconstructed a game since Battlefield 2, but all the values for their weapons and vehicles are in text files in that game, and the developers mentioned in at least one interview that they spent a lot of time in a text editor nudging values to balance the game.
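Here's a tiny sketch of that data-driven idea; the file name and the fields are made up for illustration.

[sourcecode language="typescript"]
// Monster stats live in a plain JSON text file so a designer can rebalance
// the game without touching code.
import { readFileSync } from "fs";

interface MonsterStats { name: string; health: number; damage: number; difficulty: number; }

// monsters.json might contain:
// [ { "name": "creep", "health": 10, "damage": 2, "difficulty": 1 },
//   { "name": "brute", "health": 80, "damage": 15, "difficulty": 10 } ]
function loadMonsterStats(path = "monsters.json"): MonsterStats[] {
  return JSON.parse(readFileSync(path, "utf8")) as MonsterStats[];
}

// Tweak a number in monsters.json, restart (or hot-reload), and the balance changes.
[/sourcecode]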

And finally, here’s the complete pseudocode for our theoretical AI Director:

[sourcecode language="text"]
// We assume we have some object representing all monsters as well as all the info on the players' current state
get player states
if players are doing well
    increase difficulty level
else if players are doing poorly
    decrease difficulty level
// else if they are doing like we want them to, leave it the same
while(true) // we break out below
    randomly select a monster to try to insert
    if difficulty - monster's difficulty value > 0 // we can insert
        add the monster to an array of monsters to spawn
        difficulty = difficulty - monster's difficulty
    endif
    if difficulty < our wiggle room value // we can spawn this
        break
end while

find a position off screen in whatever parameter zone we set (in front, to the sides, etc.)
foreach monster in the mob
    select a spot near your position that's a valid spawn location (i.e. not in a wall, etc.)
    if it's a valid position and nothing is currently in that spot
        spawn the monster
    else select a new spot and keep trying until you spawn the monster
    {optional} start the monster moving towards the players' current position
end foreach
[/sourcecode]

Let me know in the comments if you guys have any thoughts on this. Is it good? Too basic? Tweak this and it’ll be better? Let me know!

Published by meatshield, on May 8th, 2011 at 2:16 pm. Filed under: Uncategorized

WordPress?!

Ok, so my blog has once again changed. This is mainly because I have little to no free time coming up, and I'm tired of having a half-finished product that was difficult to use and that I didn't have time to streamline. I'm starting a new job in the middle of June, and I just want something up here I can use, focusing more on the content than the appearance. Plus I'm a terrible designer, and these WordPress blogs are at least half decent, while my previous blog was admittedly poorly done!

I’ll be reuploading the only worthwhile post from the other site in a minute.

Published by meatshield, on May 8th, 2011 at 1:38 pm. Filed under: Uncategorized