Saturday, December 22, 2007
Delusions of typical students starting game development curriculum
Here are some of the delusions common among beginning game development students. The teacher's job is to counter these delusions. It's better to be honest, to work from reality rather than encourage fantastic "dreams".
- They'll design a game and someone else will do all the work!
- It's all creativity instead of work.
- Ideas will just come to them, floating in out of the ether--and that one idea is all they need.
- AAA games can be produced easily--they have no idea of the magnitude involved.
- They'll play games all day on the job. (Even game magazine editors cannot do that.)
- It matters that they're expert game players. (It only slightly matters, and only for designers.)
- They're going to have a big effect on a AAA game soon after getting a job. (They'll probably never have a big effect.)
- Getting a degree is going to get them a job. (They have to show what they can do; degrees don't count for a lot yet.)
- If they just make a game that includes all the currently popular elements (a market-driven game), theirs will be instantly popular.
- They're going to be able to assemble a development team without salaries and get things done on schedule with the promise of royalties once the game goes commercial. (Though at least this happens every once in a while.)
- They'll start their career working in the position they want to achieve in the long run.
- The college curriculum is an extension of high school, so they can act as they did there.
- If they can do just what's in the curriculum, without any additional effort, they'll have 100% of what it takes to succeed.
- They will only work on hardcore games, underestimating the number of casual game players.
- Work will always be fun, and they will always enjoy playing the game they create at the end.
- They'll never work on a "bad" game that gets canceled.
- Testing is only playing the game, not writing long reports on bugs and flaws.
- They can sneer at and ignore non-AAA titles as though there were something wrong with them and they'd never need to work on such a thing.
- It will be Easy.
Thursday, December 20, 2007
Brief review, Understanding Comics
Understanding Comics by Scott McCloud.
Originally published in 1992, this version 1993 (many printings since), purchased recently from Amazon for $15.61.
This 216-page softcover "comic book" in largish trade paperback form is an exploration of comics as a distinct art form with its own conventions and possibilities, not merely as a combination of pictures and words. Some of our video game industry guest speakers for our Simulation and Game Development curriculum recommended it. Wanting to be an "educated person", I bought a copy and read it in bits over the course of a couple of weeks.
I don't read comics nowadays, but I did when I was a kid and somewhat beyond (the first one I purchased, for 12 cents, was Spider-Man #6--yeah, I sold it long ago). My brother collected comics quite seriously for many years. Comics are clearly still a big deal to young people, though often in the form of Japanese manga, which involve different conventions from American or European comics.
Anyone who is interested in drawing professionally should think about reading this book. It explains comics on their own terms, and using a well-drawn (mostly black and white) comic to do so helps make many things clear. It is fundamentally a work of...well, I'm not sure I can pin it down. I don't want to say philosophy, or art history, nor is it a "how to do it" book, but the treatment is absolutely serious (with occasional bits of humor thrown in). Though I cannot draw, it was an eye-opener for me, and should be for most people who do draw, whether they're interested in video games, or comics, or films, or something else. There's a lot more to drawing, and to comics, than the "kids stuff" that many people think, and this book illuminates all of that, also throwing light on some of those other media that involve drawing.
I can see why it is so widely recommended. Well worth reading if you have an interest in visually-related storytelling.
Consoles
I am not a console game player--why, when I have a fine PC?--but I have been impressed with the ideas behind the Wii. The other day I had my first chance to actually play the Wii, which strongly confirmed my point of view.
I need to tell a story about my "worst prediction ever". After the crash of video gaming in the early 80s, I supposed that there would never be another video game console comparable in popularity to the Atari 2600, because cheap computers, particularly the Commodore 64, could do everything a console could do and lots more, at a similar price. Obviously I was wrong, as the first Nintendo machine revived the genre.
What happened? I underestimated the buying public's fear of computers. In particular, I think parents buying game machines for their kids feared having to cope with a keyboard machine. And we have since seen a long succession of consoles dominate home gaming.
Nowadays, we have expensive consoles that are computer wannabes, the PS3 and the Xbox 360. Both are "frozen" technology compared with PCs. I still don't know why anyone would want to bother with an expensive computer wannabe that cannot practically be upgraded, when you can play on a much more versatile PC. Yes, the game software often isn't available for the PC; but in my particular case, the games I like to play (strategy wargames) are made for the PC to begin with.
What we have in the Wii is a throwback to the days when consoles were simple family fun, when people played consoles yet were afraid to deal with "complicated" computers. The new controllers allow gameplay that you just cannot have on a PC (or competing console) at present. The entire "ambience" of the Wii is that it's a fun thing to do with other people, not alone. That it's a game machine, not a technology machine. That it's for the casual player, not the hardcore type. My recent exposure at a party for a game club, with four playing at once, confirmed every one of those impressions.
So if I were going to buy a console, it would be a Wii, not a computer wannabe. (Though I can see buying a PS3 for Blu-Ray, I'm not sufficiently into high-definition movies to bother--upconversion from a progressive-scan DVD is fine with me.)
My experience with game development students is that most of them are very hardcore. A major task for the instructor is to convince them that they are not typical, and that they cannot plan to make games only for the hardcore. The growth in games, in my estimation, will be in casual games--the downloadable games for PCs and the games on services such as Xbox Live--and in simulations, NOT in the AAA titles prominent at Best Buy or Circuit City. The Wii is selling as fast as Nintendo can make it, much faster than the competing consoles. There's a big market there, and game students need to be aware of it.
Wednesday, December 19, 2007
An "unbalanced" though symmetric game
I've observed here that symmetric games are usually perfectly balanced, except that there may be an advantage in order of movement. This is why, in many of my symmetric games, I've tried to eliminate order of movement or at least associate it with some factor that players have a chance to control.
Chess, for example, is a symmetric game with a big advantage to first-mover (white). Other games may have an advantage for last-mover. When I playtest a symmetric game with a set turn order, I try to record the score by move order so that I can look for patterns of advantage.
Recently playing a four player Wii game involving Olympic events (I don't recall the name of the game), I saw a symmetric game that gave a big advantage to later movers. This is not so much inherent in the game as inherent in the situation, where none of the players had played before, and some had not played the Wii before. So as we played we had to figure out the different controls for each event, and how we could succeed. Those who played early in turn order were disadvantaged because they had not seen as many attempts by all the players as those who played later.
The solution would be to randomize turn order, so the player who goes first in the first round of an event might go third in the next round, then second, and so forth. I'd suspect, though, that Nintendo would respond that this would confuse the players, and so they just go with the disadvantage.
Once the players are familiar with the event's controls, the advantage is still with those who go later, as they have some idea of how much they have to do to win the event, which tells them how much risk to take. Here I might decide that in each round after the first, the players play in order of the standings, so at least the last-mover would be the player in last place.
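Both fixes are trivial to express in code. Here's a minimal sketch, assuming a simple list of player names and a dict of current scores (the names and numbers are invented for illustration):

```python
import random

def randomized_order(players):
    """Shuffle the play order each round so no seat is always first or last."""
    order = list(players)
    random.shuffle(order)
    return order

def standings_order(players, scores):
    """Play in order of the current standings: the leader goes first,
    so the player in last place gets the advantageous final slot."""
    return sorted(players, key=lambda p: scores[p], reverse=True)

# Invented example
players = ["Ann", "Ben", "Cat", "Dee"]
scores = {"Ann": 3, "Ben": 7, "Cat": 5, "Dee": 1}
print(randomized_order(players))         # e.g. ['Cat', 'Ann', 'Dee', 'Ben']
print(standings_order(players, scores))  # ['Ben', 'Cat', 'Ann', 'Dee']
```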
Sunday, December 16, 2007
A second game design class
In the community college system's SGD (Simulation and Game Development) curriculum there are two game design classes and two level design classes. In the first game design class, much effort is spent learning Gamemaker, so that students can produce simple electronic games.
The most recent syllabus for "Game Design Two" that I have seen states that students will make eight electronic games, two weeks per game, using different genres, and different student groups, during the class. In practice this apparently means the students will spend a great deal of time on games, probably get to a more-or-less working prototype, then move on to the next game.
This promotes the idea that the designer has succeeded when he arrives at a working prototype of a game. In fact, this is blatantly untrue. 80% of the designer's time is spent testing and altering the prototype in order to get a really good game from it (the prototype is NEVER a really good game). This is just another instance of the "80/20" rule, in this case the first 80% of the work (getting a working prototype) takes 20% of the designer's time, and getting it right (the last 20%) takes 80% of the time. (Perhaps in the electronic world the percentages are more 70/30 or even 60/40, but the point is nonetheless the same.)
This also encourages the idea that highly-derivative games--the ones practical to make in Gamemaker--are good games.
Moreover, this turns the class into a game-production exercise rather than a game design exercise. The students will spend most of their time worrying about getting graphics made (art, not design), about writing marketing documents that have nothing to do with actual design, and about producing a working prototype (which is programming, not design). They cannot concentrate on gameplay, the heart of design, nor do they have the time to adjust gameplay after testing, which is the mechanism that can make a game acceptably good.
So most of their time will be spent on subjects other than game design. Game design is in large part a practical skill ("10% inspiration, 90% perspiration"), something that requires efficient practice. That cannot happen in the "Eight Gamemaker games a term" format.
In comparison to the time spent, students learn much more about game design from non-digital games than from digital ones. Gamemaker is a fine simple tool, but not something that will be used in the real world to make commercially viable games. And even when students use Gamemaker, getting to a working prototype of an electronic game that amounts to anything takes a long time. Already in one Game Design One class, one group tried to make a game that Gamemaker Pro simply could not cope with. They tried many tricks, but found that their only hope was to write code that loaded and unloaded graphics and other auxiliaries, for which they had neither the time nor the expertise.
Finally, as one student also said, Gamemaker is not something you will ever put on your resume.
Consequently, in "Game Design Two" the objective should be to "finish" games or mods, more or less, and that means lots of time-consuming testing of prototypes.
This is not exactly "quality over quantity". This is a case of "completion" (a relative term!) rather than doing half a job. And completion is where the men are separated from the boys. I've already said this, but this time I'll quote from a gent who came from the game industry to teaching (Ian Schreiber): "One of the hardest things when dealing with students is to convince them to work on small games and complete them, rather than working on a single huge sprawling mess that dies under its own weight. It's also hard to convey the 80/20 rule, that 20% of the work gets you the first 80% of the game... but getting that last 20% of the game (which is the polish factor) takes a lot of time after the game already feels like it 'should' be done, and pressing on when you're sick of working on the game already is what separates the developers from the wannabes."
(http://teachingdesign.blogspot.com/)
The testing generally will not occur in class, except insofar as the instructor wants to comment on what is happening in the game. The instructor will probably play all the games at some point during testing.
If students are to get jobs as designers, their practical path for classes is to make non-digital games, and to make mods of existing games. The non-digital games will teach them far more about game design and provide games for their portfolios, while the mods will give them experience to make further mods that might get them noticed.
Those who do not wish to be designers will still benefit from working through the entire process, understanding the entire process. And they'll have more time to polish their contributions, whether art, programming, sound, or something else.
So in this class I would have student groups alternate non-digital games and mods of electronic games, perhaps three non-digital and two electronic, perhaps even fewer depending on how this works out in practice. They would FINISH the games, as best can be done in that environment (which is to say, not really finished at all). I'd want a minimum of 10 playtests for the non-digital games, and hours of testing and modification for the electronic games. I'd require extensive documentation of the testing and of the modifications resulting from it. In other words, I'd want to see clearly the progress of the students after they produce the initial playable prototype.
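One lightweight way to keep that documentation honest is a structured playtest log. This is only a sketch of what I might ask for; the fields and example entry are illustrative, not a prescribed format:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PlaytestSession:
    """One entry in a group's playtest log (fields are illustrative only)."""
    date: str
    version: str                         # which rules/prototype revision was played
    testers: List[str]
    problems_found: List[str] = field(default_factory=list)
    changes_made: List[str] = field(default_factory=list)

log = [
    PlaytestSession("2007-11-05", "v0.3", ["Ann", "Ben", "Cat"],
                    problems_found=["first player nearly always won"],
                    changes_made=["randomized turn order each round"]),
]
print(len(log), "sessions recorded so far")
```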
My first assignment, in such a class, would be to give students individually (or possibly in groups) the task to report on one game engine or moddable game. This report would be presented orally in class, to benefit all the other students. In other words, the students will help each other become familiar with the methods of producing simple electronic games or mods, so that they can decide what to pursue.
Then when the students do their electronic projects, the students will have to provide the game to be modded, or get the game engine to use, since the school won't have those resources and may not be able to install such on the computers being used in most Game Design Two classes.
Thursday, December 6, 2007
Results of non-electronic game projects
I assigned groups of three or four students (generally self-selected groups) the task of creating one non-electronic game and one electronic game. Among other things, I wanted them to see how much easier it is to produce a non-electronic prototype and be able to play it. I wanted them to see how important it is to play a game again and again, modify it again and again, to get a better product.
(There's a serious problem with the game design literature: much of it implies that game design is writing up ideas, and that once a prototype is produced--not a preliminary stand-alone prototype, but the prototype that becomes the finished game--the designer's work is just about over. In other words, there's a tendency to think that when the prototype works, you're nearly done. Even books that state clearly how much time is required to work with the playable prototype to achieve a good game don't illustrate it in practice, or in the amount of space devoted to these vital ideas.)
I tried to enforce milestones during many weeks of this process, but in the end I gave students the opportunity to fail. At one point I took each group out in the hallway, sat on the floor (their choice whether they did or not), and had them show me the prototype they had so far. For the "final" version, not final as in done, but final as in we've-run-out-of-time even though it isn't complete, I decided to have them "present" to the entire class, so people could see what others were doing.
Two of my classes are too large for everyone to "gather round a table" and watch, so in those classes the students had to stand up front. And it rapidly became clear that it is very hard to adequately describe a game without playing it, or at least setting it up. One group did produce a set of Powerpoint slides to help illustrate how the game worked, and another took digital photos of a playtest session.
A smaller class gathering around a table works better; there the "presenters" actually play a little of the game, which generates lots of questions. I can also easily see whether they've actually played it before, because groups that haven't run into lots of mechanics questions they had not thought of.
In general, the more practical games were the simple ones. Not only is this hardly surprising, it helps illustrate the point I'm trying to make, "keep it simple". My motto is "A designer knows he has achieved perfection not when there is nothing left to add, but when there is nothing left to take away." (Antoine de Saint-Exupéry)
The simple games were also easier to playtest multiple times, of course.
There were many analogs of electronic games, such as boardgame shooters. There were also analogs of card games, such as a collectible/tradable card game and a couple of my own games that students had played in the game club. There was even a game that strongly resembled Candyland! Despite my discussion of the faults of traditional games (still to be posted here), some of the games started with the dreaded "roll and move" mechanic of Monopoly, though I did talk some students out of it. Certainly, where most students are not boardgame players and are just starting to design games, the games they produce are likely to be analogs of, or derivative of, other games.
Some of the games were not playtested, some were playtested a lot. I think those who did do playtesting recognized how important it was. In a few cases a group threw away their first effort after playtesting. I don't encourage this, but in some cases it was certainly justified.
It's very difficult for anyone to "grade" these games without playing them several times, for which there is no time. My main criterion, aside from what I can see about the gameplay, is whether the students playtested the games and benefitted from that.
I also give them a peer evaluation sheet to fill out. The idea is that I may find out which people in the group actually contributed most, or least. It has its flaws, but is better than nothing.
Later in "lecture" class we made a list of lessons learned from the non-digital game project:
• Playtesting really makes a difference.
• You can't know whether the game is much good until you play it several times.
• Working in groups is tough.
• It always takes more time than you think.
• Miscommunication is common.
• The last point is more technical, but important: symmetry in starting positions removes most worries about balance/fairness (you still must worry whether a first mover or even a last mover in a round has an advantage--see the tally sketch below).
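Here is the tally sketch mentioned above: a minimal way to check move-order advantage once a handful of playtests has been recorded (the results data here is invented):

```python
from collections import Counter

def wins_by_seat(results):
    """results: one list per playtest, giving seat numbers in finishing order
    (winner first). Returns how often each seat finished first."""
    return Counter(playtest[0] for playtest in results)

# Invented playtest records: seat 1 moved first each round, seat 4 moved last
results = [
    [4, 2, 1, 3],
    [3, 4, 2, 1],
    [4, 1, 3, 2],
]
print(wins_by_seat(results))  # Counter({4: 2, 3: 1}) -- later seats winning more often
```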
Tuesday, November 27, 2007
Victory conditions summary for boardgames/cardgames
For the benefit of my digital game students, I'm trying to summarize/categorize the many victory conditions available in games (especially board and card games).
Achieve a position:
• Occupy a location--e.g., Stalingrad and Axis & Allies require occupation of certain cities
• Occupy a lot of territory--Go, Carcassonne, Blokus, many others
• Make a pattern of pieces--Tic-Tac-Toe, my Law & Chaos
• Move off the other side of the board (or reach the end of the track, as in race games)
• There are many other variations...
Wipe out/destroy something:
• Wipe out everyone--checkers/draughts, Risk (this could be called "last survivor", too)
• Take a piece--chess (the King)
Accumulate something, or get rid of something (possibly all your assets):
• $$$$ (Monopoly)
• Sets of cards (many card games)
• Use up all your cards (many card games)
Deduce/find the answer:
• Clue/Cluedo
• (If no deduction is required, this is a form of "accumulate"--as with sets.)
Use up all your assets (be eliminated), either last or first--this can be seen as a form of "accumulate something or get rid of something".
Scoring the most points at the end of a set time, or reaching a set number of points, is very common (Settlers of Catan, Britannia), but points are an intermediate step toward the achievement of some other goals--money, territory, whatever. Points are used when multiple victory conditions are wanted. For example, Britannia points include holding territory, temporarily occupying territory, killing enemy units, capturing certain locations, and more.
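A minimal sketch of that idea--folding several conditions into one point total--loosely in the spirit of the Britannia example; the categories and weights below are invented for illustration, not Britannia's actual values:

```python
def total_points(tally):
    """Combine several victory conditions into a single score.
    Categories and weights are purely illustrative."""
    weights = {
        "areas_held": 2,           # holding territory at a scoring round
        "areas_raided": 1,         # temporarily occupying territory
        "enemy_units_killed": 1,
        "key_locations_taken": 3,  # capturing certain named locations
    }
    return sum(weights[k] * tally.get(k, 0) for k in weights)

# Invented end-of-round tally for one player
tally = {"areas_held": 5, "areas_raided": 2, "enemy_units_killed": 4, "key_locations_taken": 1}
print(total_points(tally))  # 2*5 + 1*2 + 1*4 + 3*1 = 19
```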
I am going to include "choose own objectives" separately. In the classic game Careers, players secretly allocate 60 points amongst Fame, Happiness, and Money. The first to achieve his objectives wins the game. While it is an "accumulate something" condition, the strategic variability provided by choice is exceptional and notable.
Finally, some games have "Missions" (newer editions of Risk). This is another form of points, that is, each mission is one of the other kinds of victory condition.
I don't consider sports to be a form of boardgame/cardgame, but even sports can be considered in these terms. For example, in baseball, you get points by achieving a position (getting around the diamond to home plate).
Lew Pulsipher
Saturday, November 24, 2007
Checklist/reminder list for gameplay characteristics:
Non-electronic games should reveal the essence of design because they are likely to be simple. But all the comments below apply to electronic games as well.
This is, in a sense, a repeat of some of the things in the playtest notes on the blog, but I fear many have not read those.
1. What are the challenges the player(s) face?
2. What actions can they take to overcome those challenges?
3. What can players do to affect each other (if the game is for more than one player)?
4. Is the game replayable many times without becoming "just the same" over and over?
5. Is the game fair?
6. Is there an appropriate mixture for the audience and game type (consider "take that")?
7. What is the "essence" of the game?
1., 2. and 3. Remember, the essence of gameplay is interesting non-trivial challenges and actions the players can take to meet those challenges. In non-electronic games, which usually involve more than one person, another very desirable element is player interaction, specifically, how can a player affect the other players? A good game is rarely "multiplayer solitaire", or a race where players have no influence on the fortunes of other players.
This amounts to always asking yourself, "What can the player do to influence the outcome of the game?"
4. Replayability. There are other considerations beyond challenges and interaction. For example, how replayable is the game? If it plays the same way over and over again, players will rapidly lose interest in it. See my separate piece on replayability.
5. Fairness. Games should be fair. At some point, if players feel they were gypped by the rules, they're not going to like the game. They should feel that they get what they earned.
There's a particular mechanism that I'd lump into this category, the "roll a die and move that many" method used in Monopoly and many other traditional family games. I pose it to video game players like this: If you're playing an electronic game, and the maximum speed of your avatar varies periodically and randomly, aren't you going to hate that? That's what happens to a player in a roll-and-move game. And won't you hate it even worse if your opponent varies differently from you? At least, if all slow down at the same time, it's fair, but if one can move twice as fast as another, is that fair?
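A tiny simulation makes the complaint concrete. Assuming a simple roll-and-move race with a six-sided die, two players with the same rules and the same number of turns can end up far apart purely by luck:

```python
import random

def race_progress(turns, die_sides=6):
    """Total squares covered after a number of roll-and-move turns."""
    return sum(random.randint(1, die_sides) for _ in range(turns))

player_a = race_progress(10)
player_b = race_progress(10)
average  = 10 * 3.5   # what a fixed movement rate would give both players
print(player_a, player_b, average)  # the two random totals can easily differ by several squares
```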
(I'm writing a separate piece on the flaws of traditional games, so I won't go beyond "roll and move".)
6. "Take that". The mixture of strategies and occurrences in a game must be appropriate to the audience. For example, party games should not require any heavy thinking!
In some games there are plays that pretty drastically change circumstances. These are called "take that" moves. (This often involves playing a card.) If you have a game with lots of "take that" occasions, people may enjoy it as a fun "beer and pretzels" thing, but they won't enjoy it as a strategic challenge. Conversely, if you are designing a strategic game, you probably should leave the "take that" stuff out. In other words, go one way or the other, a "take that" game or one that is not.
Where do you draw the line? Experience and playtesting with a variety of people will tell.
7. Finally, ask yourself, what is the essence of this game? What would characterize it in the minds of players or observers? Is this essence Good, is it desirable?
Thursday, November 22, 2007
How to improve replayability in a game
While the "cult of the new" tends to mean that games aren't played many times before players move on to the next game, replayability is still a desirable feature of any game.
Most of the following amounts to "vary the experience", which of course is what provides replayability--varied experience:
• "Multiple paths to victory"
• Variable rather than set starting positions
• More than two players
• Asymmetric game
• Use of event cards
• Scenarios
• Optional rules
• Different sets of rules
• Hidden information
• Special abilities
"Multiple paths to victory" will result in much-improved replayability. Drawback: makes it much harder to balance the game
Variable rather than set starting positions (players choose their starting positions). A few games offer both options. Risk offers a random setup and a setup that lets players choose locations. The drawback: this lengthens the game.
More than two players (each additional player is a source of variability in himself). The drawback: lengthens the game.
Asymmetric game (standard starting position is not the same for all players). The drawback: makes it much harder to balance the game (i.e., give each player an equal chance of winning).
Use of event cards (especially in symmetric games or games without other chance factors). The drawback: can be seen to increase the influence of chance. But event cards often add enjoyable color to the game as well.
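As a sketch of the mechanism (the card names are invented), an event deck is just a shuffled list that the game draws from, so every play sees a different sequence:

```python
import random

# Invented event cards, purely for illustration
EVENTS = ["Storm", "Plague", "Windfall", "Rebellion", "Good Harvest", "Ambush"]

def build_event_deck():
    deck = list(EVENTS)
    random.shuffle(deck)   # a different ordering every game
    return deck

deck = build_event_deck()
card = deck.pop()          # draw the top event at the start of a turn
print(card, "--", len(deck), "events remain")
```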
Scenarios (which amount to differences in positions or victory conditions (or both)). Used primarily in historical games. The drawback: more time-consuming to design.
Optional rules. Again, this seems most common in historical games. These are alternative ways to play the game. At some point, many rule choices in a game design are largely arbitrary--that is, one choice leads to just as interesting a game as the other--but the designer must choose one. The other can become an optional rule. The drawback: virtually none, if the optional rule was tested sufficiently in playtesting.
Different sets of rules (for example Basic, Standard, and Advanced). The drawback: longer rules, and perhaps a feeling from some contemporary players that there's something wrong with the game because there's not "one way to play".
Hidden information. The game can diverge along many different paths when some information is hidden. Event Cards are an example of the use of hidden information, and electronic games typically enjoy the benefit, as the computer tracks the information much more easily than non-computer methods can. The drawback: something/someone has to track the hidden information, and in some cases, cheating may be possible.
Special Abilities. Cosmic Encounter thrives on the variety of special abilities for each side. Role-playing games typically include a vast number of skills, feats, spells, and classes, not all of which can be included in any single game or series of games. The drawback: play balance can suffer; and there's a lot of information to be devised and incorporated into the game.
Finally, people have suggested that, in general, the more chaos in a game, the more replayability it is likely to have. Even Go, which has none of the overt variation I've listed above, is highly replayable because a single move can change circumstances fairly strongly.
Another point of view is that when the number of reasonable choices is maximized, replayability is enhanced. But too many choices can also lead to "analysis paralysis".
Community in a Game Development Curriculum
Millennials (those up to about 27 years of age) thrive on community (MySpace, Facebook, MMOs, etc.). They expect to work together and to share experiences, even simple things like the high scores they get on video games. Furthermore, unlike older generations, Millennials have no "presumption of virtue" about institutions. We older folk assume that a school exists to educate, or that a hospital exists to cure people. Millennials suppose the school (or hospital, or government unit, or other institution) has hidden agendas, that education (or medicine, etc.) is not its primary purpose. Teachers and schools have to earn the trust of Millennials in a way we never had to with Baby Boomers or Gen Xers. A lack of community, a lack of effort to build trust, inevitably contributes to poor education and poor retention rates.
Here are some things I try to do to foster community:
Game Club. It would be astonishing for a school with a thriving game development program to have no game club. I got students interested in one when I arrived at my current school, and we have meetings twice a week. Paperwork went in last week to become an officially recognized school club. This is one of the best venues for an instructor to get to know students, of course, and vice versa.
I've found that the best time to schedule a meeting is right after a class full of interested people. The situation would be different at a residential school rather than a community college.
Right now we meet in a classroom without computers, but half the students own laptops, so electronic games are played as well as boardgames (prototypes that I've designed) and collectible card games. We've also met in the lab that is devoted entirely to game development, but it is usually occupied by classes.
Bulletin board. You need a large (4 by 8 feet) *unlocked* bulletin board outside the most commonly used classroom, where students can communicate with one another and faculty can communicate with them.
Listserv. Require all the students to sign up for something like Yahoo Groups (which is what I use). It is sometimes amazingly difficult to get students to actually sign up, even via invitation. Ours is an announcement-only listserv, so there is little traffic compared to a discussion listserv. Some faculty think that posting notices to Blackboard serves this purpose, but that is awkward and unreliable--and most of all, many students never read Blackboard.
Web site. Here teachers are severely constrained by school support, and that support appears to be quite poor in some cases. Often the site is out of date and dysfunctional. The listserv Web site can be an alternative up to a point. And I maintain my own site that is primarily links (school rules prevent more).
Surveys. I use Surveymonkey's free service to create surveys, then show students the results. The free version can only have 10 questions and 100 respondents per survey, but that works OK for my situation (about 100 new students this year). Some questions are just curiosity, many are related to recruitment and retention.
Blog. I like to run a blog for each class that students must check every day. This is much more intimate and immediate than Blackboard (which is poor software in any case, and has nothing to do with the real world). School rules may prevent a class-specific blog from being visible to people who are not members of that class, and many schools do not support limited-access blogs. Nonetheless, I maintain this blog, and the general game design blog I've written for some years, and ask students to read this one at least.
Forum. Some people think an online Forum such as one implemented with phpbb is excellent, others think it isn't much help. To me, current information gets lost in the threads, compared with the listserv; but it may be easier for students to find information about common topics in a forum. I have not started one.
School support. Some schools encourage student communication, others don't. As an example, at many schools, if students want to publicize a new club, it is easy to put up some signs and in other ways let people know about it. Yet at other schools there are no such avenues. For example, the bulletin board in the student lounge requires permission from the Dean of Students before you can put a notice on it. Only small bulletin boards exist at doorways, and these are "official" and locked.
Impressions. The impression conveyed by some schools is that they want all the students to shut up, do what they're told, and get out of the way when they're not in class. Yet this doesn't work with millennials. If millennials don't feel that they're a part of something, they're much more likely to quit. The building where the game development classes are held, for example, must provide the right signals. If it reminds one of a prison rather than of a pleasant place to learn, how will students react? It cannot be "antiseptic" and "faceless"; it should be warm and inviting, "homey".
I'd say this is all common sense, but there are many, many schools that do not support, or even prevent, some of this from occurring.
Wednesday, November 21, 2007
Some additional notes about multiplayer games
In a multiplayer boardgame or card game, the focus is on who (which player) you're going against, not on how you're getting there (maneuver). In a two player game, the focus is on how you're getting there, not on who you're going against, because there is no choice of the latter (you have only one).
In general, in non-electronic games, in multiplayer games you're playing the player much more than the "system". In electronic games, even multiplayer, you're playing the system first, then the other players. You can't "look them in the eye", you can't see body language. Yes, you can use Skype or some built-in system to talk to your opponents, but you may not KNOW them, and you won't see them. It makes a difference.
Do people who play as opponents in online multiplayer electronic games become friends? I'm not talking about co-operative games like Everquest, where they're in the same party/guild. I think the answer is no. Do players of multiplayer non-digital games face to face become friends? Often, if they aren't friends already.
Tuesday, November 20, 2007
Practice Makes Perfect--You don't start out as an expert practitioner
Students typically come into a game development curriculum with many delusions. One (well, really three) is that they're going to have one great idea, quickly turn it into a game, and then bask in adoration.
A subset of these delusions, which I want to concentrate on here, is that the first game they'll make will be excellent ("awesome" is the usual word I hear). It's hard to make students realize that initial failure in game design must be expected, just as in other walks of life.
So they make their first game, or game concept. It's usually terribly derivative of other games. If (as they should be) they're required to design non-electronic games, the result is even worse, because they derive their ideas from Monopoly and worse games, the "Ameritrash" that so annoys "real boardgamers" nowadays.
In the "Age of Instant Gratification" young people just don't understand the requirement to learn how to do something well before you can become really good at it. They're shocked to find out that they've produced wholly inadequate stuff. They've been patted on the back for years in K12 for doing next to nothing, because (with exceptions) K12 is all about false esteem rather than capability.
Here are some examples I describe to students from well-known practitioners that illustrate the time it takes to learn a craft:
John Creasey, in about 65 years of life, published over 600 books, mostly mysteries. I once read that he received OVER 700 REJECTIONS (presumably mostly for short works) before he sold any writing. He had to learn how to write well, yet look where he went in the end.
Jerry Pournelle, a well-known science fiction and technology writer (two Ph.D.s) says that if you're willing to throw away your first million words, you can become a novelist. In other words, until you learn your craft, what you're writing--that's the equivalent of at least ten normal novels--won't be worth publishing.
Even good stuff gets rejected by publishers. The Lord of the Rings was rejected by publishers. My game Britannia was rejected by the American publishers, who only published it after it was published in Britain! These were products that proved in the end to be quite viable (Brit isn't on the same level as LOTR, of course), yet they still got "thumbs down".
Of course there are exceptions. J. K. Rowling of "Harry Potter" comes to mind. Though she was much older than our typical student.
How did I practice? I designed variants (often amounting to new games) of a game called Diplomacy for years, and made adventures and rules modifications for the paper version of Dungeons and Dragons, long before I designed commercially-viable stand-alone games.
When students find out how much work is required to improve and then polish a game before it can have a chance to be commercially successful, many opt for another, "easier" career. The following quote (about books) from one of the giants of the 20th century illustrates what happens with games as well:
"Writing a book is an adventure. To begin with, it is a toy and an amusement; then it becomes a mistress, and then it becomes a master, and then a tyrant. The last phase is that just as you are about to be reconciled to your servitude, you kill the monster, and fling him out to the public." --Sir Winston Churchill
Those who cannot get through these later phases will never be successful in the game design business.
Sunday, November 18, 2007
Design Flaws to Watch out for in Multiplayer Games
Digital game development students aren't used to thinking about the consequences of games involving more than two opposing interests, because most electronic games include only two sides, one often a machine opponent. Several problems named by boardgamers can occur when there are three or more sides in a game. Many of these are much more likely to occur when the victory condition amounts to "wipe out the opposition":
• Turtling
• Leader bashing
• Sandbagging
• Kingmaking (petty diplomacy problem)
Turtling occurs when a player sits back and builds up strength while others expend theirs. This can often be seen in multi-player online RTS games. When there are more than two sides, a player can hang back, building up bases and technology, while he lets other players slaughter one another's forces. Then he comes out and cleans up the remainder.
A general solution is to use a different victory condition. E.g., capture of certain locations as the means of victory forces players to come out of their shells. Giving points for destroying the opposition also encourages aggression rather than turtling.
Another solution is to provide economic incentives to be aggressive. This often involves capturing economically valuable areas, so that a successful aggressive player can build up forces faster than the turtle.
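To show students the arithmetic behind that incentive, here is a minimal Python sketch of my own (the numbers and function names are invented for illustration, not taken from any particular game): if income is proportional to territory held, a player who keeps capturing compounds an advantage over a turtle who merely conserves forces.

```python
# Minimal sketch: territory-based income makes aggression pay off over time.
# The values are arbitrary; the point is the compounding difference.

def total_income(turns, start_territories, captures_per_turn, income_per_territory=1):
    """Total income over 'turns' for a player capturing 'captures_per_turn' territories each turn."""
    territories = start_territories
    income = 0
    for _ in range(turns):
        territories += captures_per_turn
        income += territories * income_per_territory
    return income

aggressor = total_income(turns=10, start_territories=5, captures_per_turn=1)
turtle = total_income(turns=10, start_territories=5, captures_per_turn=0)
print(f"aggressor: {aggressor}, turtle: {turtle}")   # aggressor: 105, turtle: 50
```

Over ten turns the aggressor has earned roughly twice what the turtle has, which is the pressure the designer wants.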
Leader bashing tends to happen in games without much hidden information, that is, where it is clear who the leader is. Then the other players gang up on the leader. ("Of course", many would say, why wouldn't one try to weaken the leader?) If it isn't clear who the leader is, this is less likely to occur. If it is hard for some players, at least, to affect the leader in any given situation, then there will be less leader bashing, as those players will distract the ones who can affect the leader.
Sandbagging is often a consequence of leader bashing. A player will try to get himself in second or third place, rather than first, so that when the first place player is bashed, the sandbagger can swoop in for the win. Timing, obviously, is quite important here.
The solution to sandbagging is to reduce leader-bashing to a reasonable level.
Kingmaking is a consequence of what R. Wayne Schmittberger calls the "petty diplomacy problem". Where there are three interests, and one recognizes that he cannot win the game, that loser may be able to determine which of the other two wins. Even if the game is being played by more than three, it will often come down to three major interests. More generally, if a losing player can determine who wins, you have kingmaking in play.
One way to avoid this is to structure the game so that a player cannot be sure he is going to lose until it's too late for him to become a kingmaker. Of course, some players believe kingmaking is the "wrong way to play", that every player should try to win no matter what. But designers cannot rely on players to be self-governing in this way.
Another way to avoid kingmaking is to make it too hard for a player to use all his capability against another to prevent that other from winning. As a simple example, in a race it's usually hard for a losing player to have much effect on the leading players.
Now here are some alternatives to a victory condition of "kill everyone else". These help mitigate some of the problems we've been discussing:
• economies (especially zero-sum)
• points
• missions
Economies. Players receive more assets as the game progresses, in accordance with some rules relating to locations or resources, not merely to a table of additional appearances. If a player plays well, he will earn more new assets than if he plays badly.
In a zero-sum game, each player's gain is another player's loss. The classic game Diplomacy is the best example of this. There are 34 "supply center" locations on the board. A player gets one unit (army or fleet) per center. If a player takes another's center, the first is going to increase his forces, while the second will lose forces, at the next building period.
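The adjustment itself is simple bookkeeping. Here is a quick Python sketch of the build phase just described (the rule is the one stated above, one unit per supply center; the position shown is invented, not from any actual game): each power builds or disbands until its unit count matches its centers.

```python
# Sketch of Diplomacy's build phase: units owed = supply centers held.
# A positive adjustment means builds, negative means disbands.

def adjustments(centers_held, units_on_board):
    """Return each power's build (+) or disband (-) count."""
    return {power: centers_held[power] - units_on_board.get(power, 0)
            for power in centers_held}

# Illustrative (not historical) position:
centers = {"France": 6, "Germany": 4, "Russia": 5}
units = {"France": 5, "Germany": 5, "Russia": 5}
print(adjustments(centers, units))   # {'France': 1, 'Germany': -1, 'Russia': 0}
```

Because the 34 centers are fixed, France's build here is exactly Germany's disband: the zero-sum property in miniature.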
Points. Players earn points for certain events or achievements. This could be capture of certain locations, destruction of enemy assets, holding certain places at given times, and so forth. In a wargame, a player could be wiped out, yet if he's done enough beforehand he can still have the most points to win the game. In general, where points are concerned the game does not continue until all but one player is wiped out. Either there will be a time limit or a point limit.
E.g., in my "light wargame" Britannia, players receive points for holding areas, occupying areas during a certain period, for dominating regions (king of England), for forcing nations to submit, and even for killing enemy units. A nation may be wiped out in the course of the game, but each player controls several, and the points that defunct nation earned still count. Points are based on historical performance, and are accumulated at different paces, so the current score is not a good gauge of who is actually winning the game.
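A sketch of that bookkeeping may help students see why elimination doesn't erase points. The nations are real Britannia nations, but the player assignments and point values below are invented for illustration; the real score chart is far more detailed.

```python
# Sketch: Britannia-style scoring. Each player controls several nations and
# keeps everything those nations earn, even after a nation is wiped out.
# Assignments and point values here are invented for illustration.

controls = {"Romans": "Red", "Saxons": "Blue", "Welsh": "Red"}
scores = {"Red": 0, "Blue": 0}

def award(nation, points):
    scores[controls[nation]] += points

award("Romans", 12)   # the Romans score, and are later wiped out...
award("Saxons", 8)
award("Welsh", 5)     # ...but the Red player keeps the Romans' 12 points.
print(scores)         # {'Red': 17, 'Blue': 8}
```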
Missions. This is a form of points because the mission involves completion of particular goals, but when a mission is completed the game is over, so no point record is needed. A mission can be as simple as capturing certain cities, or much more complex. Occasionally the missions are hidden, that is, you don't know which mission your opponent is trying to fulfill.
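Mechanically, a mission is just a predicate checked against the game state, and the game ends the moment any player's predicate is satisfied, so no running point record is needed. A minimal sketch, with missions and state invented for illustration:

```python
# Sketch: missions as victory conditions. Each mission is a test over the
# game state; the game ends as soon as any player's mission is met.
# The missions and positions below are invented for illustration.

missions = {
    "Alice": lambda state: {"Rome", "Paris"} <= state["cities"]["Alice"],
    "Bob":   lambda state: len(state["cities"]["Bob"]) >= 4,
}

def check_victory(state):
    for player, mission in missions.items():
        if mission(state):
            return player        # game over; no point totals needed
    return None

state = {"cities": {"Alice": {"Rome", "Paris"}, "Bob": {"Berlin"}}}
print(check_victory(state))      # Alice
```

Keeping the missions hidden changes nothing in the check itself, only in what the other players know.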
Now let's take Risk as an example. Risk is not a particularly good game, but a great many people have played it, and it exhibits most of our design flaws.
In Risk the object is to completely wipe out all competition. It uses economy to try to avoid the four problems. You get extra armies at the start of your turn if you hold an entire continent, to provide an economic incentive to attack. There is also card acquisition: you must take a territory in a turn in order to get a card, and matched sets of three cards gain you large numbers of armies. You also get armies according to the number of territories you hold. If you turtle or sandbag you get fewer new armies than your competitors. In fact, it's typical for players to attack as much as they can until they're out of spare armies, in order to limit how many territories their opponents control (and consequently how many new armies the opponents get).
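For concreteness, here is a small Python sketch of Risk-style reinforcement arithmetic. I'm using the commonly published values (a three-army minimum, one army per three territories, fixed continent bonuses); editions vary, so treat the numbers as illustrative rather than definitive.

```python
# Sketch of Risk-style reinforcements: territories, continents, and card sets
# all feed the player's economy. Bonus values are the commonly published ones;
# check your edition.

CONTINENT_BONUS = {"Asia": 7, "North America": 5, "Europe": 5,
                   "Africa": 3, "South America": 2, "Australia": 2}

def reinforcements(territories_held, continents_held, card_set_value=0):
    armies = max(3, territories_held // 3)          # minimum of 3 armies
    armies += sum(CONTINENT_BONUS[c] for c in continents_held)
    armies += card_set_value                        # escalating value per set turned in
    return armies

# A turtle with 9 territories and no continents:
print(reinforcements(9, []))                        # 3
# An aggressor with 14 territories, South America, and a card set worth 6:
print(reinforcements(14, ["South America"], 6))     # 12
```

The gap between 3 and 12 armies per turn is exactly the economic pressure against turtling and sandbagging described above.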
There is certainly leader-bashing, but some players may not have forces near enough to the leader to do any damage. You are often better off wiping out a weak power rather than attacking the strongest, because when you wipe out an opponent, you get his cards, and if you can make another set you get more armies (in increasing numbers) with which to immediately continue attacking.
Kingmaking is also quite limited, as by the time a player realizes he's a goner, he doesn't have enough force to do much damage to one of the leaders.
Despite all this, a couple decades after the original English edition of Risk was published, "Mission Cards" were added to the mix. Each player receives one with a mission unknown to his opponents. A mission might be something like "Control Asia" (the largest continent). Hence a player can win the game, by completing his mission, long before he wipes out all opposition. Unfortunately, the mission cards aren't modified by the number of players, so some may be much easier to achieve than others in certain situations.
(Another well-known board wargame, Axis & Allies, is two sides even when there are five players (Germany and Japan on one side, Britain, US, and Russia on the other), hence not subject to these problems.)
Saturday, November 17, 2007
What makes a game great?
In some classes we tried to make a brief list of what makes a game great. I have my own ideas (which I briefly discussed in my contribution to Hobby Games: the 100 Best), and I intend to write a separate article about that, so I'll just list what we came up with in class in no particular order.
Innovation--it's good to be first
Story
Improving realism (e.g., destructible environments in Red Faction, bullet time, scars, camera movement, etc.)
Really good gameplay (playability)
PvP, team play--something other than person vs. computer
Variety of modes of play (online, pvp, MMO)
Replayability
Customizable
Verisimilitude of the environment--when you’re there, it feels like you’re really there
Immersive
The music
Your character can grow and change (skills, attributes)
Non-linear gameplay
Good interface and tight game controls
Community
Tuesday, November 13, 2007
History and History Classes
All of our game development students must take a history class or two. This requirement comes directly from the local game developers and manufacturers, who want their employees to know something more about history than the average person. This is of particular interest to me insofar as I have a Ph.D. in history.
I'm told that the history classes at my school have proved to be particularly hard for the SGD students, if we can judge from the grades they get. I'm afraid an awful lot depends on the teacher in cases like this: some history teachers think history is a succession of dates and events, whereas history is really a story (the word "story" is in the word "history"). It's a story of how individuals and groups coped with the challenges and opportunities of their time and their physical location. This is why history fascinated me when I was young, and I always wanted to know "what happens next", the same way I do when I read a fictional story.
In a sense, studying history is like watching a gigantic game: games consist of challenges and the actions players take to overcome them, history consists of challenges and the actions people try to take to try to cope with those challenges.
Many games contain stories, and some people (including me) like to play games partly to see "what happens next". The difference, in history, is that "it really happened", it's not made up.
Young people tend to be very uninterested in anything that happened before they were born. For those, I hope they think of history as a kind of (serious) game, and maybe that will make it more interesting. What could people have done differently, how could they have "improved their score"?
I hope students will have teachers who teach history as stories, but if not, please try to treat it as stories and "grin and bear it". Don't let a history class mess you up, because history isn't inherently difficult.
Saturday, November 10, 2007
Where Degrees Matter
Sometimes big companies--game companies are rarely big companies--will have a hard-and-fast degree requirement. Why? Because they leave hiring to the Human Resources department (usually a big mistake), and the HR people frequently don't have a clue about the job they're hiring for. Their main interest is simplifying their task by setting absolute rules. An easy way to do this is to require a certain degree. This puts off many potential applicants, and enables the HR person to easily weed out other applications because the required degree isn't there.
But you don't want to work for a company like that, not if you're creative and imaginative and self-motivated. When HR takes over hiring, you have a company that's already crippled. (Yes, a great many companies are crippled. And it shows.)
Friday, November 9, 2007
The Virtues of Cards in "boardgames"
I am going to try to summarize the virtues of using cards in boardgames--or perhaps in boardgame-card hybrids. I'm doing this primarily for the benefit of my students, but I thought it might be worth contributing to others as well.
Summary:
• Cards reduce the need to read rules before playing ("put the rules on the cards")
• Cards are a simple way to add color and visual interest to a game (as opposed to expensive 3D sculptured pieces)
• Cards provide a simple and clean way to add "chrome" to a game (chrome usually involves rule exceptions)
• Cards provide a convenient way to "inventory" game information
• Cards are an easy way to increase variety in a game (which usually increases replayability as well)
• Cards can be used as a substitute "board"
• Cards are a more acceptable means of introducing variation through chance, as many people now dislike dice rolling
Cards reduce the need to read rules before playing
"Put the rules on the cards". This is the easiest way to simplify the difficulties of learning a game, especially for those teaching it to others. A player only needs to consider/understand the card-rules when they hold or draw the card. Well-known collectible card game designers introduced me to the "seven line rule": players won't read more than seven lines of rules on a card, so don't put more on them. For millennials the rule is certainly "the less text the better".
This is also a good way to reduce the size of the rulebook. Big rulebooks are daunting even if the game itself is fairly simple.
Cards are a simple way to add color and visual interest to a game
Cards often host attractive color graphics, much larger than you can put on tiles or counters. They are cheaper than sculptured three-dimensional figures, and 3D figures are seldom multi-colored, either.
Cards provide a simple and clean way to add "chrome" to a game
"Chrome" is the term for special rules that often reflect special historical or personal circumstances. Hence chrome usually involves rule exceptions. And where "chrome" includes a visual, a card is the best way to illustrate/introduce it. This relates also to the first point, putting rules on the cards rather than in the rulebook.
If I designed Britannia today I might include cards to add "chrome" to the game. A variant using "Nation Specialty Cards" already exists (my design, not released).
Cards provide a convenient way to "inventory" information
When players need to keep track of what items or spells or capabilities they possess, cards are an excellent choice. They're familiar, easy to organize, and have both text and graphics. For example, spells are tracked in EL: the Card Game (see below) via cards.
Cards are an easy way to increase variety in a game
"Event cards" are quite common in games these days. Lots of different scnearios/situations can be introduced in a small deck of cards.
The variety of the cards usually increases replayability as well. More possibilities equals more paths that the game can follow. Players can play many times and still be able to say "I never saw that happen before".
Cards can be used as a substitute "board"
I've devised several prototype games that use cards in place of a board. From a commercial point of view, this results in a much less expensive package that is easier to ship and to find shelfspace for.
In Battle of Hastings some of the cards represent Saxon and Norman units; the play area is crowded until late in the game, so the cards can be arranged in a 7 by 6 array of "spaces", though I also have two strips, one to either side, to help orient the rows.
In Enchanted Labyrinth: the Card Game (derived from EL the boardgame) some of the cards represent the "dungeon" being explored by wizards and their minions. As creatures move into new areas, the cards are turned face up to reveal the contents of the area.
In Zombie Escape, face-down cards represent the building (a reform school) that the players try to escape from in the face of zombie infestation. Once again, discovery occurs when players move onto the card areas.
Cards are a more acceptable means of introducing variation through chance
Many people now dislike dice rolling, if only as a reaction against the random "roll and move" mechanic so infamous in older American family games. People believe (and sometimes it's true) that they can manage cards in a way they cannot manage dice rolls.
Lew Pulsipher
Thursday, November 8, 2007
The Game Industry Wants "Educated People"
Before you react, let me hasten to say that "educated" refers to an attitude, not to earned degrees. Fortunately for us, the game industry does not yet have the "degree-itis" that is invading all walks of American life, as though the only way you can learn something is to get a degree in it. The industry is a "meritocracy", where you are valued and hired for what you can do and what you can create. "Educated people" doesn't necessarily imply academic degrees, it implies a certain attitude toward life. It's that attitude that the game companies want and need to succeed. So I am not talking about the classic idea of the "well-educated" person, which relates to particular things like knowledge of the classics.
Nonetheless, if you read good advice about breaking into the game industry, that advice will include "read as much as you can" and "educate yourself as much as possible", even as the advisors suggest that a bachelor's degree is a good idea. For example, everyone interested in "breaking in" should read the wealth of advice on Tom Sloper's Web site (sloperama.com) and his monthly IGDA column. I used to use a book by Ernest Adams, Break into the Game Industry (http://ernestadams.com/), now a bit long in the tooth (2003) but still available from Amazon. His advice is well worth reading (especially about getting a job and how to keep a job), and amounts to the same as Tom's.
No, an "educated person" is a person with a certain attitude toward life, not necessarily one who has a degree. There are people with legitimate Ph.D.s who could be called uneducated (though this is very unlikely). There are certainly many people with bachelors degrees who are essentially uneducated. And there are 17 and 18 and 19 year-olds who clearly are educated people, though they haven't had the time to accumulate a wealth of experience and knowledge that is associated with being educated.
So what makes someone "educated"? An educated person wants to KNOW, and will make an effort to find out things. An uneducated person will tend not to bother. Here's a simple example. An educated person, confronted with a word he doesn't know, is likely to look it up. He wants to improve his understanding (of language, of the world). An uneducated person isn't going to bother.
Further, an educated person teaches himself or herself when necessary, from books or otherwise, rather than wait for a class. The uneducated ones will frequently whine "I haven't been sent to training for that".
Not surprisingly, educated people read a lot, and uneducated ones don't.
This year at my school we had a speaker who helped epitomize this attitude. Bruce Shankle worked at Red Storm at the time (now, Microsoft). He had no degree, but had six years of schooling. He'd taught himself a great deal to help himself as a programmer, and said (IIRC) that 20% of his work time was spent learning new things, and he also frequently studied at home. He paid his way to GDC even in years when he wasn't working in the game industry, because he wanted to know about the industry. This is clearly an educated man in the sense the industry wants, though he has no degree.
I am another example. As some readers know, I worked for many years in computer support at Womack Medical Center on Ft. Bragg, and before that as a dBase programmer. For much of that time I was chief of PCs and networking, supporting up to 900 PCs--and I was the first Webmaster there as well. Almost everything I knew that got me that job and helped me do that job I taught myself, because a Ph.D. in History doesn't help with computing, nor does any degree you got in 1981! I have never actually taken a "real college class" in computing, though I've taken many training classes through the years at Womack. Similarly, everything I've learned about games and game design I taught myself, of course. I read a lot, I experimented a lot, I made a lot of mistakes, too. It does not hurt to have a Ph.D., for sure, but that's not what moved me forward.
Good classes help you learn much quicker, as you take advantage of the experience of teachers and authors. I'd have had an easier time if I'd had classes to take, but such classes rarely existed in the early 80s. Bruce Shankle benefited from many classes, though he had no degree.
Now how does this contrast with typical K12 "education"? There are many exceptions, but generally students in K12 are trained, not educated. The teachers' success, their very job, depends on the students' performance on end-of-class tests in many cases. So the teachers, naturally, try to get students to memorize all the material that is on those wretched tests. The students are trained to parrot material, not to think.
Even good students learn that they can get by just fine by doing exactly what they're assigned and no more. They'll ask what the minimum is, and that's all they'll do. Worse, they think if they do the minimum they should get an "A", though no one in the real world wants an employee who thinks that way.
The habit of students to ask for a test "review"--which usually means, they want to be told exactly what will be on the test--is a consequence. In K12 they're told exactly what's on the test, then they regurgitate it on the test, and fools call this "education". I call it memorization, the same kind of thing that blights computer certification. This year I have told students there will be no review, because it's up to them to decide what's important. That's the way the real world is--there's no review from the great Cosmic All, just as we can say that life is an essay test, not a multiple choice test. Of course, many of the smarter students pay attention to what I say in class each day (a few even write it down), and figure that's what I think is important. How sensible, and yet rare!
This year I assigned students the "task"--though it's a habit they should get into on their own--of maintaining a notebook or other "data store" in which they record game-related ideas as they get them. The "uneducated" attitude surfaced soon after: "how much do I have to include in this?" The student wanted to know the minimum, rather than take the educated attitude that this was something he should do anyway, that was worth doing, and he should put some time into it. (That student has since dropped out, unsurprisingly.)
We show videos of guest speakers, such as Bruce Shankle. Students often don't pay attention, or half pay attention, fooling themselves into thinking they can remember a lot while doing something else, when in fact memories seem to be really poor nowadays. Once again, instead of making the most of listening to an expert talk about what happens in the real world, they try to "just get by". An "old man" (me) probably listens better the fourth time he hears one of these lectures, than many of the students do the first time.
Similarly, I see the uneducated attitude toward this blog. It doesn't appear to immediately affect the student, it's not obviously part of a specific task, so many tend to ignore it, even though I ask questions on tests to see if the students actually read it. The same happens when I ask the students to read a web site. Worst of all, though perhaps more understandable, most students don't bother to read the textbook, though there's a wealth of good advice in it despite its flaws.
Educated people like to use their brains in top gear; uneducated people prefer to run in "idle". The old-fashioned "thirst for knowledge" is what I'm talking about, in a sense; something I still see in older students, but rarely in younger ones.
After I talked about this in a class, one student observed that his generation has been told that the only way to learn is to take a class. I've taught graduate school for 20 years at night, and something like 17,000 classroom hours in my life, and I KNOW that people can get through classes and get degrees and still not know a whole lot about what's important in the topics they've studied.
What's important is what you know and what you can do, not what classes you took or what degrees you have. Unfortunately, this attitude is being squeezed out of American life. Thank heaven the game industry still sees it this way.
Nonetheless, if you read good advice about breaking into the game industry, that advice will include "read as much as you can" and "educate yourself as much as possible", even as the advisors suggest that a bachelor's degree is a good idea. For example, everyone interested in "breaking in" should read the wealth of advice on Tom Sloper's Web site (sloperama.com) and his monthly IGDA column. I used to use a book by Ernest Adams, Break into the Game Industry (http://ernestadams.com/), now a bit long in the tooth (2003) but still available from Amazon. His advice is well worth reading (especially about getting a job and how to keep a job), and amounts to the same as Tom's.
No, an "educated person" is a person with a certain attitude toward life, not necessarily one who has a degree. There are people with legitimate Ph.D.s who could be called uneducated (though this is very unlikely). There are certainly many people with bachelors degrees who are essentially uneducated. And there are 17 and 18 and 19 year-olds who clearly are educated people, though they haven't had the time to accumulate a wealth of experience and knowledge that is associated with being educated.
So what makes someone "educated"? An educated person wants to KNOW, and will make an effort to find out things. An uneducated person will tend not to bother. Here's a simple example. An educated person, confronted with a word he doesn't know, is likely to look it up. He wants to improve his understanding (of language, of the world). An uneducated person isn't going to bother.
Further, an educated person teaches himself or herself when necessary, from books or otherwise, rather than wait for a class. The uneducated ones will frequently whine "I haven't been sent to training for that".
Not surprisingly, educated people read a lot, and uneducated ones don't.
This year at my school we had a speaker who helped epitomize this attitude. Bruce Shankle worked at Red Storm at the time (now, Microsoft). He had no degree, but had six years of schooling. He'd taught himself a great deal to help himself as a programmer, and said (IIRC) that 20% of his work time was spent learning new things, and he also frequently studied at home. He paid his way to GDC even in years when he wasn't working in the game industry, because he wanted to know about the industry. This is clearly an educated man in the sense the industry wants, though he has no degree.
I am another example. As some readers know, I worked for many years in computer support at Womack Medical Center on Ft. Bragg, and before that as a dBase programmer. For much of that time I was chief of PCs and networking, supporting up to 900 PCs--and I was the first Webmaster there as well. Almost everything I knew that got me that job and helped me do that job I taught myself, because a Ph.D. in History doesn't help with computing, nor does any degree you got in 1981! I have never actually taken a "real college class" in computing, though I've taken many training classes through the years at Womack. Similarly, everything I've learned about games and game design I taught myself, of course. I read a lot, I experimented a lot, I made a lot of mistakes, too. It does not hurt to have a Ph.D., for sure, but that's not what moved me forward.
Good classes help you learn much quicker, as you take advantage of the experience of teachers and authors. I'd have had an easier time if I'd had classes to take, but such classes rarely existed in the early 80s. Bruce Shankle benefited from many classes, though he had no degree.
Now how does this contrast with typical K12 "education"? There are many exceptions, but generally students in K12 are trained, not educated. The teachers' success, their very job, depends on the students' performance on end-of-class tests in many cases. So the teachers, naturally, try to get students to memorize all the material that is on those wretched tests. The students are trained to parrot material, not to think.
Even good students learn that they can get by just fine by doing exactly what they're assigned and no more. They'll ask what the minimum is, and that's all they'll do. Worse, they think if they do the minimum they should get an "A", though no one in the real world wants an employee who thinks that way.
The habit of students to ask for a test "review"--which usually means, they want to be told exactly what will be on the test--is a consequence. In K12 they're told exactly what's on the test, then they regurgitate it on the test, and fools call this "education". I call it memorization, the same kind of thing that blights computer certification. This year I have told students there will be no review, because it's up to them to decide what's important. That's the way the real world is--there's no review from the great Cosmic All, just as we can say that life is an essay test, not a multiple choice test. Of course, many of the smarter students pay attention to what I say in class each day (a few even write it down), and figure that's what I think is important. How sensible, and yet rare!
This year I assigned students the "task"--though it's a habit they should get into on their own--of maintaining a notebook or other "data store" in which they record game-related ideas as they get them. The "uneducated" attitude surfaced soon after: "how much do I have to include in this?" The student wanted to know the minimum, rather than take the educated attitude that this was something he should do anyway, that was worth doing, and he should put some time into it. (That student has since dropped out, unsurprisingly.)
We show videos of guest speakers, such as Bruce Shankle. Students often don't pay attention, or half pay attention, fooling themselves into thinking they can remember a lot while doing something else, when in fact memories seem to be really poor nowadays. Once again, instead of making the most of listening to an expert talk about what happens in the real world, they try to "just get by". An "old man" (me) probably listens better the fourth time he hears one of these lectures, than many of the students do the first time.
Similarly, I see the uneducated attitude toward this blog. It doesn't appear to immediately affect the student, it's not obviously part of a specific task, so many tend to ignore it, even though I ask questions on tests to see if the students actually read it. The same happens when I ask the students to read a web site. Worst of all, though perhaps more understandable, most students don't bother to read the textbook, though there's a wealth of good advice in it despite its flaws.
Educated people like to use their brains in top gear; uneducated people prefer to run in "idle". The old-fashioned "thirst for knowledge" is what I'm talking about, in a sense; something I still see in older students, but rarely in younger ones.
After I talked about this in a class, one student observed that his generation has been told that the only way to learn is to take a class. I've taught graduate school for 20 years at night, and something like 17,000 classroom hours in my life, and I KNOW that people can get through classes and get degrees and still not know a whole lot about what's important in the topics they've studied.
What's important is what you know and what you can do, not what classes you took or what degrees you have. Unfortunately, this attitude is being squeezed out of American life. Thank heaven the game industry still sees it this way.
Thursday, November 1, 2007
Analogy: Level Design
I'm trying to describe to my students what level designers do. The first thing to say, of course, is "it depends"--depends on what the company expects the level designer to do, and what is "farmed out" to someone else.
Nonetheless, the analogy to a Dungeons and Dragons or other paper RPG referee is good, though a great many younger people don't seem to be familiar with (non-video) D&D these days.
Fundamentally, level design is a limited, concentrated form of game design. The core mechanics of the game are already determined. The level designer uses them to create an episode that will have good gameplay, that will entertain in various ways. Gameplay always involves challenges and actions to meet those challenges, of course. This also involves goals, ways to achieve the goals, paths (such as corridors and rooms), appearances, and the behavior of NPCs and opposition (scripts to do better than the game AI can do on its own).
In D&D the "DM" or DungeonMaster starts with the "core mechanics" of D&D and fleshes out adventures. The adventure, analagous to a video game level, usually involves a goal of some kind, if only to "wipe out the badguys". The DM may have particular methods in mind whereby the players can achieve the goal, or he may simply set up a situation and trust the players to creatively find ways to achieve the goal. In level design, playtesting will show whether creativity can prevail; in home-made D&D adventures there is no playtesting, so the DM must be more careful. But D&D adventures that are published are certainly playtested.
A published D&D adventure includes all--well, most--of the information a referee needs to run the adventure. The video game level includes everything needed for the player(s) to play the adventure--er, level.
So the level designer must specify and perhaps place (though probably not make) the graphics, map out all the paths and alternatives the player(s) can pursue, place the opposition (monsters or otherwise), script the conversations, specify the goal and how player(s) find out what that goal is, specify exceptions to the normal core mechanics, and all the other things that are required for the "adventure" episode.
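To make this concrete, here is a minimal sketch--my own illustration, not any studio's actual file format or tool--of the kinds of things a level designer might specify. Every field name, and the guard-patrol script at the end, is invented for the example.

# Hypothetical level specification; all names are made up for illustration.
level = {
    "name": "Warehouse Raid",
    "goal": "Recover the stolen documents and reach the exit",
    "goal_revealed_by": "radio briefing at the spawn point",
    "paths": [
        {"from": "loading_dock", "to": "office", "type": "corridor"},
        {"from": "loading_dock", "to": "office", "type": "vent shaft"},  # an alternative route
    ],
    "placements": [
        {"asset": "crate_stack", "position": (12, 0, 4)},               # existing art, placed but not made here
        {"npc": "guard", "position": (20, 0, 7), "script": "patrol_A"},
    ],
    "rule_exceptions": ["no vehicles allowed in this level"],
}

def patrol_A(guard, player):
    """A script layered on top of the core AI; assumes a (hypothetical) guard object with these methods."""
    if guard.can_see(player):
        guard.call_reinforcements()
    else:
        guard.follow_waypoints(["dock", "office_door", "dock"])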
Wednesday, October 31, 2007
"Fame"
Bruce Shankle, when he spoke at my school in January '07 about breaking into the game industry, pointed out that even the most famous names are often not recognized by experienced gamers. How many know who Carmack, Romero, or Will Wright are? But mention their works (Doom and Quake, The Sims) and they're recognized. Sid Meier is recognized primarily because his name is part of some game titles.
My students generally don't have a clue that I am a little bit famous--after all, I "don't do electronic games". I was amused one day at the game club when a student said he'd talked to a friend in Florida and told him his instructor had designed Britannia. The friend got excited and said something like "oh, that's my favorite game, he's famous". No. Hardly anyone in the game industry is famous. But many people worldwide recognize my name (which is fortunately nearly unique), or the game Britannia, or what I did with D&D and Diplomacy variants ages ago. Measured by that number of people, I'm likely the most famous (or better, least unfamous) person the students know, but they don't think in those terms.
I'm going to edit in a more recent story. Another student (Frank) told me that in his history class there's an older man who obviously knows the history instructors well--perhaps he's working on another degree. He and some other folks have been working on an historical game for 10 years and are about to publish. Frank asked him, "Do you know the game Britannia?" Yes. "The designer of Britannia is teaching game design at Wake; you ought to talk to him," Frank said. The gent got quite excited, though I haven't heard from him yet!
The "me" generation generally isn't impressed by anyone but themselves, of course. In a high school class I taught last year, about one quarter of the 20-some students thought they would be famous at some time. This is evidently a common "delusion" amongst the younger generation. (I say delusion because, barring extreme chance, you'd not have even one "famous" person come out of a group that small.)
So what can a teacher do about this? Nothing that I can think of. Sometimes only time/experience lets people shed illusions.
Number of people working on an "AAA list" game
I found this note highlighting the massive size of modern video games:
More than 70 people, including 20 programmers and 30 artists, worked on Madden NFL '07. Similarly, Maxime Beland, creative director at Ubisoft's Montreal studio, says that 150 people worked with him to create Rainbow Six Vegas.
What students can do outside of class
In classes we made a list of what students can do outside of class to help prepare/qualify themselves to apply for a job in the video game industry. There is no particular priority here, for the most part.
IGDA free membership
Business card!
Use Printmaster or a word processor; buy business card stock at an office supply store
Go to IGDA meetings
“Go to GDC”
Go to DGXpo
Go to the Goldsboro convention (one-day)
Web sites to visit/monitor:
Gamasutra
Slashdot.org
Gamespot
Mobygames.com
IGN
Sloperama
Engadget
Joystiq
Boardgamegeek.com (for ideas)
Bgdf.com (board game designers’ forum)
And many others
Download and try engines (for programmers and designers): XNA, Torque, RPGMaker, Source, etc.
Application software:
Maya Personal Edition 8.5 (2008)
30 day 3D Studio Max
30 day Photoshop CS3
Blender
Gimp
MS Expression Graphic Designer
Resume
Your Web site
Geocities free
Freehostia free
Lunarpages <$100 / year (my host and package for pulsiphergames.com)
Portfolios (on the Web, and paper/CD/DVD)
Modding/make scenarios
Be a guest speaker (libraries, schools)
Write things (Gamasutra, many other Web sites, Game Developer Magazine)
Remember that many businesses do drug testing of prospective employees
Marwood Ellis came across some interesting networking advice about the Game Developers Conference (and possibly other events) that could be useful to students.
http://www.igda.org/articles/mmencher_networking05.php
Another article:
http://www.igda.org/articles/mmencher_networking06.php
And a list of other articles:
http://www.igda.org/Forums/showthread.php?s=9c6f2892ee6d8d60253a0e792da87a80&threadid=20528
Friday, October 26, 2007
Slides about size of modern electronic games
As a follow-up to posts about the sheer size of modern electronic games, I made a set of slides linked here.
Thursday, October 25, 2007
Playtesting is "Sovereign"
I've been known to say about game design that "Prototypes are sovereign", that you haven't really designed a game until you have a playable prototype. That's because, until the game is played, you just cannot really know what you've got. But I would be just as right to say "playtesting is sovereign".
When you design a game, you try to see in your "mind's eye" how the game is going to work, but until you play it, you simply cannot know what is going to work and what is not. The first few times you play, many things will change (provided, of course, that you're willing to make changes, which is a major requirement of a game designer).
Granted, more experienced designers can foresee weaknesses and eliminate them before reaching the prototype stage. But we're interested here in teaching game design, so this is addressed to inexperienced designers.
Let's clarify something right now. I am talking about playtesting to improve gameplay, not testing to squash programming bugs. The latter is what is often meant by "testing" when people talk about electronic games, and this testing takes place late in the development cycle, when the gameplay and appearance are set in stone (because it's too late to make major changes). This bug testing ("Quality Assurance") is aimed at making sure the game works the way it is supposed to, not at whether the way it's supposed to work is good or not. "Bug testing" essentially does not exist in non-electronic games, although it is important (and often forgotten) to test the production version of a game, as converting the prototype into the published version can introduce its own set of problems. (For example, the boxes on the Population Track of the FFG Britannia board are really too small for the purpose; this new version of the board evidently was not actually tested.)
So: here I'm talking about playtesting the gameplay and assorted details (such as user interface) that strongly affect gameplay.
There are three stages to playtesting: solo playtesting (also called "alpha"), local playtesting ("beta"), and "blind" playtesting (also part of the "beta" stage). (In electronic games, often the in-house testing is all called "alpha", and outside testing is called "beta".)
Few non-video games are meant to be played alone. Yet in solo playtesting, the designer plays the game solitaire, playing all the sides independently as best he can. At this stage the designer is trying to get the game to a state where other playtesters have a good likelihood of enjoying it, and of playing it through to the end. At the solo stage the designer might try a portion of the game and then stop because something isn't working, or because he has a better idea. When asking other people to play a game I would never stop a game in the middle, or try something that might be so bad I'd want to stop, though I know of designers who think nothing of doing this.
Most video games can be played alone, and if there's a more-than-one-player component, it's usually impossible for the designer to play several sides by himself.
At the local playtesting stage, people are asked to play the game through, usually in the presence of the designer when it is a non-electronic game. Almost always, at the beginning of this beta testing I do not have a full set of rules, I just have notes about how to play, and some of the details are in my head. (This is a big reason why it is much quicker to design a non-electronic game. With an electronic game all the "rules" must be settled precisely before the programming of the prototype can be completed. The programming is the equivalent of the rules of the non-electronic game.) As local playtesting goes on, I make a rough set of rules, then finally write a full set of rules.
As the local playtests occur, I write down notes about what I see and hear, and especially about answers to questions that need to be incorporated into the full rules. By the time I have a full set of rules, I usually refer to the rules for detailed questions, to see if the rules cover that question and whether it is easy to find that information.
The third stage is "blind" testing, where someone is given the game and must play it without any intervention from the designer. This is a test of the rules, somewhat akin to "bug testing". Are the rules clear enough that people can play the game from the rules? What questions do the blind testers come up with, and how can the rules be improved as a result? Unfortunately, nowadays people are often poor rules-readers, so I advocate electronic tutorials to help people learn how to play a game.
I know from experience with published games, especially Britannia, that there will ALWAYS be people who misread rules, sometimes willfully. 99% clarity of detail is about the best you can get using the English language.
In a sense, electronic games can jump to "blind" testing quickly, because by their very nature these games hide the rules from the players, enforcing them through the programming. This is an advantage of electronic games over non-electronic, that no one needs to read and understand a set of rules.
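As a minimal sketch of what "the programming is the rules" means--my own toy example, not code from any real game--here is a placement rule enforced in software: the player never reads the rule, the program simply refuses illegal moves.

class Board:
    """Toy board for an abstract placement game."""
    def __init__(self, size=8):
        self.size = size
        self.occupied = set()                       # cells that already hold a piece

    def is_legal_placement(self, cell):
        row, col = cell
        on_board = 0 <= row < self.size and 0 <= col < self.size
        return on_board and cell not in self.occupied

    def place(self, cell):
        if not self.is_legal_placement(cell):
            raise ValueError(f"illegal move: {cell}")   # the code, not a rulebook, enforces the rule
        self.occupied.add(cell)

board = Board()
board.place((3, 4))        # accepted
# board.place((3, 4))      # would raise ValueError: the cell is occupied, so the move is rejected automatically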
Game design, when taken to completion, is highly interactive. Playtesting sets good games apart from bad, and playtesting is (or should be) interactive. In a separate post I list some of the things you must look for while doing beta testing.
There is no doubt that the last 20% of refinement of a game takes 80% of the designer's time. Playtesting is time-consuming, tweaking rules is time-consuming. In the non-electronic world, often a "developer", another person, does much of this testing and tweaking. I personally strongly prefer to do this myself, even though it is much less fun than creating new games, because I don't want someone else "screwing up" my game. (See http://www.pulsipher.net/gamedesign/developers.htm for some of my experiences.)
Even when you don't intend to change the rules, rewriting them introduces unintended consequences (as evidenced by the Britannia Second Edition rules rewrite by FFG--and apparently having no testing of the new version of the rules compounded the problem). When you rewrite to change a rule, the repercussions are often larger. So a remarkable amount of testing is needed.
In the electronic world it is difficult to quickly and cheaply make big changes in a prototype. This is one of the problems that all makers of electronic games face, and a major reason why some electronic games are not very good. By the time the development studio has a playable prototype, it is too late in the schedule to make the changes that playtesting reveals are necessary.
At some point during playtesting of a game, the designer must decide if "there's something in it" (as I put it): if the game is really good enough that people might play it, like it, and would buy the finished version of it. There are really two times when this should happen: once during solo playtests (alpha testing), and a second time during playtesting by others (beta testing). The "something in it" point in solo playtesting is an indicator that it's about ready for others to play. The "something in it" point in beta testing comes from observing people playing the game and their reactions during and after playing.
Usually I need to tweak a game quite a bit from its state at the end of solo play, before I can reach the "something in it" stage of beta testing. Sometimes there doesn't seem to be anything in it during beta testing, and I set it aside for further thought. Sometimes I realize, from solo playing, that there isn't "something in it", at least not yet, so I set it aside at that point.
I strongly suspect that novice designers rarely understand these stages. Their egos become involved, and they assume that because they took the time to make the game, and it's their idea, there must be something in it. In extreme cases, the "designer" thinks he has "something in it" when all he has is an idea, that is, when he has virtually nothing at all. The number of people who think they've successfully designed a game, yet haven't playtested it at all, is remarkable. Playtesting is the meat of successful design, not the end. (I confess that I don't think of "development" as a process separate from design.)
So how do you recognize when there's "something in" a game? That's hard to say, unfortunately. Surveys or written feedback won't necessarily reveal it. In alpha testing, the "something in it" stage is a gradual realization, coming from observing my own thought processes as I play. My games are, almost without exception, strategy games. When I "see" myself thinking hard about the strategies, and liking the options, then I may think there's something in it.
In my case, in beta testing when spontaneously (without any urging) people say "I'd buy this game", I know I've got something. However, this is rare, and I don't remember anyone ever saying that about Britannia, or Dragon Rage, or Valley of the Four Winds, but they have all been quite popular. Perhaps better, if people want to play the game again, in this day of the "cult of the new" when hardly anyone plays a game twice in the same session, there may be "something in it".
I am very low-key in beta playtesting, preferring to watch reactions of people rather than try to solicit opinions, in part because people (being polite for the most part) won't say negative things even when asked. I also try not to play, as 1) the designer playing in a game tends to skew results and 2) when I play, I do a worse job of playing, and a worse job of evaluating the playtesting, than if I did either alone. As I'm that strange sort of person who enjoys watching games as much as playing, why play?
I do not "inflict" a game on players until I think it is good enough to be OK to play, that is, I've reached that first "something in it" stage. Evidently some other designers playtest with other people very early: not me. My playtesters play games to have fun, not as on obligation, and most are not hard-core boardgamers, so I do what I can to make sure the game MIGHT be fun before I ask them to play.
As I said, playtesters tend to be polite. It's hard to find out what they really think, and I am skeptical that a feedback sheet will make a difference. Rather, I sometimes try the "Six Hats" method (devised by Edward de Bono) when playtesting; specifically I'll ask players successively to put on their black hat (the judge), then the red hat (intuition and emotion) to see how they assess a game, and then the yellow hat (the positive side of assessing an idea) to see what they like about a game. With local playtesters I sometimes ask them to think of ways to make the game better (the green hat). Google "de Bono" or "Six Hats" for more information.
Also see the following article on Gamasutra: http://www.gamasutra.com/features/20050913/sigman_01.shtml
This includes tips on constructing prototypes.
Tuesday, October 23, 2007
Things to watch for when playtesting
Length. A game always takes longer with new players, of course. But if it takes too long for new players, will they play again? Length is also quite dependent on how much players enjoy what is happening in the game. The boardgame Civilization can take 8 to 12 hours, but those who love the game don't find that the time weighs upon them.
Down time. Downtime is the time people must wait while someone else is taking a turn. This can be a problem even in a turn-based electronic game. Do people get bored waiting for their turn?
Balance. Is the game balanced? Even if the game is symmetric (all players start with identical situations), is there an advantage to playing first (or last)? Chess is symmetric except for who moves first, but moving first is a big advantage.
Dominant Strategy. Look for any dominant strategy ("saddle point"). This is a strategy that is so good that a player who wants to win must pursue it; or a strategy so good that some will pursue it, yet that strategy renders the game less than entertaining. For example, in a Euro-style 4X game I've designed, one player found that by getting together a sufficiently large force, along with certain technology research, he could completely dominate other players who weren't pursuing the identical strategy. I want the game to offer a variety of paths to success, so I had to change the rules fairly extensively. This is why it is very important to have testers who are dynamite game players, so that they'll find these strategies during testing, rather than have someone find them after the game is published. I'm lucky that I have one such player, and that I can be such a player myself when I put my mind to it.
Analysis paralysis. Are there too many things to watch for or keep track of, or too many choices, so players either freeze up or give up on figuring out what is the best thing to do? There are always "deliberate" (slow) players, the question is, is everyone slow or frustrated?
Rules difficult to grasp. What do the players find hard to grasp? (In my prototype Age of Exploration, players had trouble grasping the difference between movement of units and placement of units; I used the same distinction in an abstract stones-and-hexes prototype, and no one has a problem.) Even if, after playing, players "get it", it might be necessary to change something. (In AoE I changed the rules extensively to recast/eliminate the distinction.)
What do players tend to forget? This isn't quite the same thing as what's difficult to grasp. Some rules just don't stick in people's minds. Is there anything you can do about it? Is there some play aid to help people remember?
What do players not bother to use? Some rules exist but no one uses them. If the threat of using them is not making a difference in the game, then perhaps you should eliminate the option. For example, in my hex-and-stones game Law and Chaos I originally allowed people to move a piece rather than place one. This happened rarely, as it was usually better to place another piece and increase the number on the board. So I eliminated the possibility, except as an "optional rule".
This was done in some haste, and I imagine I'll think of more.
Here are some items added from comments on boardgamegeek.
To see the discussion on boardgamegeek go to http://www.boardgamegeek.com/article/1810302#1810302
Adequate control. Do the players feel that they can exert a measure of control over what happens in the game? Remember, any (strategic) game is a series of challenges and actions in response to those challenges. (Harmony)
Horns of a Dilemma. On the other hand, are there enough plausible decisions in a play to make the players think, but not so many that "analysis paralysis" sets in? Even in a simple game, if a player can do only two of five possible actions in a turn, is there tension there or are the plays obvious? As one commenter put it, do the players sometimes feel "so much to do, so few actions"?
Player interaction. Do the players have to take the plays of other players into account? Yes, some games are virtually multi-player solitaire, and some players are happy with this. But most players want to be able to affect other players with their moves.
Taking it to the Max. Can extreme behavior within the rules break the game? Sure, if someone pursues a bad strategy, they'll lose. The question is, is there some extreme strategy that results in an unfair game?
Components and Play Aids. Do the physical parts of the game help play flow smoothly, or does something need to be changed? Is there too much record-keeping? How can it all be simplified?
Stages of play. You probably learn this in alpha/solo testing, if you do solo testing (which I strongly recommend). Are there identifiable stages in the game, especially ones where the typical run of play changes? E.g., in chess there are the early, middle, and end games: pieces are deployed in the opening, the armies mix it up in the midgame, and so forth. An exploration game has an expansion period followed by consolidation and then (usually) conflict. Etc.
Player interest/"fun". What part(s) of the game seem to be most interesting to the players? I'm not in favor of trying to figure out "fun", because fun comes from the people who are playing more than from the game design itself. And there are many games that I wouldn't call "fun" (including Britannia) that are nonetheless interesting and even fascinating.
Finally, remember Antoine de Saint-Exupéry's maxim: “A designer knows he has achieved perfection not when there is nothing left to add, but when there is nothing left to take away.”
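For what it's worth, here is a minimal sketch of how the watch-list above could be kept as a simple playtest log, so that observations from each session land in one place instead of in scattered notes. The format and field names are just my illustration, not a standard.

from dataclasses import dataclass, field

# The checklist items from this post, in machine-usable form.
CHECKLIST = [
    "length", "down time", "balance", "dominant strategy", "analysis paralysis",
    "rules hard to grasp", "rules forgotten", "rules never used", "adequate control",
    "dilemmas", "player interaction", "extreme strategies", "components/play aids",
    "stages of play", "player interest",
]

@dataclass
class PlaytestSession:
    date: str
    players: int
    notes: dict = field(default_factory=dict)        # checklist item -> list of observations

    def observe(self, item, text):
        assert item in CHECKLIST, f"unknown checklist item: {item}"
        self.notes.setdefault(item, []).append(text)

session = PlaytestSession(date="2007-10-23", players=4)
session.observe("down time", "players 3 and 4 drifted off during long turns")
session.observe("rules never used", "no one ever moved a piece instead of placing one")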
Saturday, October 20, 2007
Sailors on the Seas of Fate
My title is a variation of a book title by Michael Moorcock (from the Elric of Melnibone series), a title that has always stuck in my mind.
I am a literal-minded person to an extreme, and as a result I am not good at making up analogies: I would rather discuss the reality than some comparison to that reality. However, I've come up with an analogy that might help students and teachers understand what's going on in many classes, so I'll explain it here.
We are all figuratively "cast upon the seas of fate" in a class, or in life as a whole. But we respond to that differently, in ways that are relatively easy to see in a class. Some are captains of sturdy sailing vessels, some are sailors on small boats, and some are castaways on makeshift rafts (or, like a message in a bottle, bobbing along on the waves without direction).
The Captains are guiding their ships, looking for the best winds and currents, keeping a "weather eye" at all times. They intend to do all they can to reach port. Some of the Captains have helpers (significant others, family, friends, mentors) providing support, some are "solo sailors". But all depend on themselves first of all to get where they need to go.
Their answer to the following two questions is to "disagree". "Luck plays a big part in what happens to me" and "When I'm required to do something (as at work or school) I do just enough to get by." They recognize that much of what happens to them is their doing, not someone else's fault.
Yes, these Captains can be thwarted by the "perfect storm", by circumstances (such as illness) that they truly cannot control. But they're doing their best to avoid those situations.
In terms of students, these are the ones who keep track of when work is due, who actually read textbooks (well, in the current generation, sometimes), who know what "study" means, who are trying to get an "A" rather than a B or C.
At the other extreme are the castaways, or the "message in a bottle", drifting along on the waves, hoping that they'll be carried to a good destination. Like a message in a bottle, they have little influence over where they're going. Like a castaway on a makeshift raft, they might have rigged up a sail, or they might have a paddle, but when things go wrong they often just throw up their hands and say "it's not my fault" instead of doing what they can to guide their fate. These are the folks who tend to blame everything that happens to them on someone or something else. They may indeed have difficult family circumstances, financial problems, and so on, but in the end it is usually a lack of WILL that will do them in.
Their answer to the two questions is to "agree". ("Luck plays a big part in what happens to me" and "When I'm required to do something (as at work or school) I do just enough to get by.") "It's not my fault" is their mantra. They may feel that the world "owes them something", but they find out that in the adult world that isn't true. The "world" is cruel and heartless. In classes, they are likely to fail, or to get really poor grades.
The third group, the sailors, are the "in-betweeners". They are looking to survive rather than to prosper. They try to guide their small boats, but often wish someone else was in charge, and sometimes aren't very diligent. Often they are looking for help. Sometimes they can get it, from family and friends and acquaintances (and teachers), sometimes not. In the end, in a class, you have to do it yourself, and sometimes they're up to it, sometimes not.
In classes they sometimes do what they need to do, sometimes not. They are often content with a C grade, or maybe a B. Their responses to the two questions are often in the middle, of course. They may learn better habits and become captains, or they may fall into the castaway category, or they may muddle along as sailors.
In a community college class you'll find a big proportion of sailors, quite a few castaways, and a variable number of captains. At Duke or UNC-CH you'll find a great many captains and few castaways, but still a goodly proportion of sailors. Many of the sailors will soon become captains, however.
Unfortunately, most K12 education is now designed to produce castaways far more often than captains. People are often told exactly what to memorize for the "end of class" test, regurgitate it, and go on to the next year, without having learned much, certainly without having learned good habits. What they do during the year in class doesn't matter much, what matters is the end of class test. Consequently the students are trained rather than educated. So in college, especially community college, we get many people who are ill-prepared to succeed in classes (or in life, unfortunately).
Where is the teacher in all this? The teacher is the Admiral, the Convoy Commander in wartime, trying to shepherd their fleet to the proper port(s) through dangerous waters. Unfortunately, the Admiral cannot sail every vessel; and when there's a straggler, the Admiral cannot stop (and endanger) the entire fleet for one member. The Admiral can only provide an example, and lead, and provide assistance when practical, and hope that all will follow.
So I say to students, imagine you are "cast upon the seas of fate". How are you going to react, what are you going to do about it?
Lew Pulsipher
Saturday, October 13, 2007
Inefficiency of big teams
Tyler Bello sent the following comment on team size and development time directly to me:
"I noticed your blog post's about team size/development time and wanted to add something. The business world ticks very slowly, this applies to big game studios. If you compare indy projects to big projects of the same caliber (regardless of sales/studio etc.) you will notice that the indy titles develop 10x faster than the ones from the big studios and with much smaller teams (or, at the same pace, with much smaller teams).
The first example that comes to mind is Project Offset. In only 1.5 years a team of 3 (1 programmer, 2 artists) created an incredible engine. I would attribute this to the efficiency of small teams: the bigger the motor, the less efficient it is with gas usage.
You can view the video under the downloads tab.
http://www.projectoffset.com/team.html
Another very small-team company, whose games happen to sell wildly, is Introversion (Uplink: 1 programmer; Darwinia/Defcon: 2 programmers). They started with Uplink, which received mild sales but much praise. They then moved on to Darwinia, which was a smash hit, and now Defcon, which went straight to Steam and is also very popular.
Minuscule teams can still make games that are just as good/beautiful/whatever as the big dudes; it's just not as common.
I just don't think that teams should be as big as they are, and games don't have to take as long as they do to make. The process is drawn out by bureaucracy."
I've seen a few magazine ads for Darwinia, but of about 40 students I asked, only one had played Darwinia, and that only a demo (verdict: not so good). Evidently it isn't quite in the same category as Oblivion, Halo 3, Rainbow 6, and other very well-known games.
When a big publisher plans to publish a major game, they intend to spend a large sum on marketing. Introversion, as a smaller publisher selling games in less-than-top outlets, may not need to spend that kind of money.
Whenever large sums are at risk, companies will want to manage and monitor that risk. Managing risk is very important: if your project depends on very few people, then the risk that one or more of them will quit, become incapacitated, or simply not be up to the task is very significant. There's every incentive to spread the risk among more people, hence a larger group. There's also the notion that more people will finish faster, though where programming is involved this is often regarded as a fallacy, one that gave rise to a classic book (which I have not read, I must say) called The Mythical Man-Month. http://www.amazon.com/Mythical-Man-Month-Software-Engineering-Anniversary/dp/0201835959/ref=pd_bbs_sr_1/105-2718850-6987636?ie=UTF8&s=books&qid=1192314711&sr=8-1
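For readers who haven't run into Brooks's argument, the heart of it is simple arithmetic: every pair of people who must stay coordinated is a communication channel, and the number of pairs grows roughly with the square of the head count. Here is a minimal sketch (my own toy illustration in Python, not anything taken from the book) of how quickly that overhead climbs:

```python
# Toy illustration (mine, not from Brooks's book): with n people on a
# project, there are n*(n-1)/2 pairs who may need to coordinate.
def communication_channels(n):
    """Number of distinct pairs in a team of n people."""
    return n * (n - 1) // 2

for team_size in (3, 10, 30, 100):
    print(f"{team_size:>3} people -> {communication_channels(team_size):>4} channels")

# 3 people -> 3 channels; 30 people -> 435; 100 people -> 4950.
# Head count grows about 33x from 3 to 100; coordination pairs grow about 1650x.
```

The actual overhead in a studio is of course nothing so tidy, but the direction of the curve is the point.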
Further, there will be more people whose job is to monitor and coordinate, because there are more programmers.
I have a saying: "The level of chaos increases with the square of the number of people involved. The level of chaos increases with the CUBE of the number of people IN CHARGE." And more chaos means less efficiency.
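Taken literally, which the saying isn't really meant to be, it amounts to chaos growing as the square of the team size plus the cube of the number of people in charge. A tongue-in-cheek sketch, with the formula and head counts invented purely for illustration:

```python
# Purely illustrative model of my saying; the formula and the head
# counts below are invented, not measured.
def chaos(team_size, people_in_charge):
    return team_size ** 2 + people_in_charge ** 3

indie = chaos(team_size=3, people_in_charge=1)     # e.g. 3 developers, 1 lead
studio = chaos(team_size=60, people_in_charge=8)   # e.g. 60 developers, 8 leads/producers

print(indie, studio, round(studio / indie))        # 10 4112 411
# A team 20x as large ends up, by this toy model, with roughly 400x the chaos.
```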
I'm also guessing that Introversion self-financed their games, that is, made the prototype and then found a publisher. If so, then Introversion assumed much of the risk, and could choose to accept the risk of depending on just one or two programmers.
We need to remember also that in the "beautiful" games, there are many more artists than programmers involved.
"I noticed your blog post's about team size/development time and wanted to add something. The business world ticks very slowly, this applies to big game studios. If you compare indy projects to big projects of the same caliber (regardless of sales/studio etc.) you will notice that the indy titles develop 10x faster than the ones from the big studios and with much smaller teams (or, at the same pace, with much smaller teams).
The first example that comes to mind is Project Offset. In only 1.5 years a team of 3 (1 programmer 2 artists) created an incredible engine. I would attribute this to the efficiency of small teams, the bigger the motor the less efficient it is with gas usage.
You can view the video under the downloads tab.
http://www.projectoffset.com/team.html
Another very small team company , whose games happen to sell wildly, is Introversion (Uplink 1 programmer, Darwinia/Defcon 2 programmers). They started with Uplink, which recieved mild sales but much praise. They then moved on to Darwinia which was a smash hit and now Defcon which went straight to Steam and is also very popular.
Minuscule teams can still make games that are just as good/beautiful/whatever as the big dudes, it's just not as common.
I just don't think that teams should be as big as they are and games don't have to take as long as they do to make. The process is drawn out by bureaucracy."
I've seen a few magazine ads for Darwinia, but of about 40 students I asked, only one had played Darwinia, and that only a demo (verdict: not so good). Evidently it isn't quite in the same category as Oblivion, Halo 3, Rainbow 6, and other very well-known games.
When a big publisher plans to publish a major game, they intend to spend a large sum on marketing. Introversion, as a smaller publisher selling games in less-than-top outlets, may not need to spend that kind of money.
Whenever large sums are at risk, companies will want to manage that risk, and monitor it. Managing risk is very important: if your project depends on very few people, then the risk that one or more of them will quit, or become incapacitated, or simply not be up to the task, is very significant. There's every incentive to spread the risk amongst more people, hence a larger group. There's also the notion that more people will finish faster, though this is sometimes regarded as a fallacy where programming is involved, generating a classic book (which I have not read, I must say) called The Mythical Man-Month. http://www.amazon.com/Mythical-Man-Month-Software-Engineering-Anniversary/dp/0201835959/ref=pd_bbs_sr_1/105-2718850-6987636?ie=UTF8&s=books&qid=1192314711&sr=8-1
Further, there will be more people whose job is to monitor and coordinate, because there are more programmers.
I have a saying: "The level of chaos increases with the square of the number of people involved. The level of chaos increases with the CUBE of the number of people IN CHARGE." And more chaos means less efficiency.
I'm also guessing that Introversion self-financed their games, that is, made the prototype and then found a publisher. If so, then Introversion assumed much of the risk, and could choose to risk depending on one or two programmers.
We need to remember also that in the "beautiful" games, there are many more artists than programmers involved.
"Always do right--this will gratify some and astonish the rest."Mark Twain
"A designer knows he has achieved perfection not when there is nothing left to add, but when there is nothing left to take away." Antoine de Saint-Exup'ery
"Not everything that can be counted counts, and not everything that counts can be counted." Albert Einstein
"Make everything as simple as possible, but not simpler." Albert Einstein