MIT Mystery Hunt's Journal
 

Below are the 20 most recent journal entries recorded in MIT Mystery Hunt's LiveJournal:

Saturday, January 21st, 2017
10:57 pm
[dr4b]
2017 writeups
I realize nobody uses Livejournal anymore (and with good reason), but here's my writeup of my experience during this year's hunt anyway:

http://dr4b.livejournal.com/1277723.html

Long as usual, even though it was a shorter hunt than usual.

I have seen some Reddit threads talking about the hunt; is there anywhere else to look?
Saturday, April 23rd, 2016
9:55 pm
[devjoe]
Saturday, January 23rd, 2016
6:17 pm
[dr4b]
Has nobody else written up this year's hunt yet on LJ?
I've seen a bunch of posts on Reddit, and of course devjoe's stuff, and I'm wondering what I'm missing without the Tumblr this time around.

Anyway, here is my writeup of my experience this year.  As usual, it's way too long, and for some reason this year I'm less coherent than usual.  (I was on Up Late; we didn't do so well this time around.  That's okay, though.)

http://dr4b.livejournal.com/1274683.html
Friday, January 30th, 2015
3:20 am
[mystery_fish]
My 2015 Hunt writeup
I tried to limit spoilers to obvious things or to physical puzzles (which I'm guessing won't be something post-Hunt solvers can do). http://mystery-fish.livejournal.com/37537.html
Wednesday, January 28th, 2015
10:41 am
[dr_whom]
My 2015 Hunt writeup
Here are my two posts about this year's Hunt: my overall thoughts on it, and my comments on specific puzzles. I tried to be a little less spoilery than usual, but there are still some spoilers in the puzzle notes.
Friday, January 23rd, 2015
9:00 am
[dr4b]
My 2015 Hunt writeup
It's extraordinarily long (as in I ran up against the 65536-character limit) and contains spoilery spoilers, but if you're on this community you probably played in the hunt anyway. Just a warning: Random is going to make one of the rounds available as a book (which I didn't work on) and the entire hunt available on a server to play whenever (though I still don't understand how the logistics of all the things requiring you to be at MIT or to interact with characters are going to work), so they requested that we not put spoilers in our writeups, except that's really hard to do. But this was the first year that my team (Up Late) finished, so I really wanted to capture as much as possible.

http://dr4b.livejournal.com/1267170.html
Thursday, March 27th, 2014
11:41 am
[seekingferret]
Escape from Zyzzlvaria fanfiction
jadelennox hosted the Invisible Ficathon fanfic exchange, which just revealed its stories. It resulted in two Escape from Zyzzlvaria fanfics. Naturally, they both contain puzzles.



Where on the Brass Rat is Captain Blastoid (2780 words) by seekingferret
Chapters: 2/2
Fandom: Escape from Zyzzlvaria (Board Game)
Rating: General Audiences
Warnings: No Archive Warnings Apply
Characters: Zoey, Captain Blastoid, Scotchy, Harold, Ship's Computer, Ernie
Summary:

Admiral Kavouri arrives on the ship for a routine inspection, but Captain Blastoid is not there to meet him. Can the Admiral get to the bottom of this mystery, or will the crew of the Brass Rat drive him skittering up a wall?



Things to do in Zyzzlvaria when you're temporarily discorporated (5992 words) by marginaliana
Chapters: 2/2
Fandom: Escape from Zyzzlvaria, Doctor Who (2005), Welcome to Night Vale, Alice In Wonderland - Lewis Carroll
Rating: General Audiences
Warnings: No Archive Warnings Apply
Characters: Captain Blastoid, Algernon, Scotchy, Ernie, Zoe, Harold, Leah, Tenth Doctor, Donna Noble, The Man in the Tan Jacket, Cheshire Cat
Additional Tags: Crossover, puzzle, gratuitous plot device
Summary:

"Ye great blasted idiot!" said Scotchy to Algernon. "You canna recognize a plot device when you see one?"

Wednesday, January 29th, 2014
6:58 pm
[justsomesleddog]
Mystery Hunt Design Philosophy
It took me a minute to remember that I had a LiveJournal account that could post here, but I wrote a (long) blog post about the things we strove for in the Alice Shrugged hunt. Since I plan on directing Random to it, I welcome comments/criticisms/etc. from other people who have written hunts before (or even just people who have participated in a lot of them). It's less about the logistical considerations (which I'd also like to communicate to Random) and more about general hunt philosophy.

Also, if you haven't noticed, I've slowly been updating the archives and there's some new-ish stuff up, including bonus back-up puzzles that weren't used in the hunt. If you were disappointed by the lack of a Dan Katz wrestling puzzle, then I have good news for you.

Runaround puzzles should be coming soon too, along with photos of events and kickoff and other things.
Sunday, January 26th, 2014
10:28 am
[dr_whom]
Friday, January 24th, 2014
7:46 pm
[emengee]
Back-End Stuff
I was reading a comment that brokenwndw had written in another post, and I want to second the questions it raises:

"Can I ask how you developed your processes? Those too seem very familiar, from the heavy up-front editor involvement, to the two test solve standard, right down to the term "final fact check" (which I remember adding to the puzzle status table late in 2011). I'm curious because Codexians have been wondering how much information is getting passed from team to team, and in what ways. It's really good to see "Puzzletron" (which gained that name from Manic Sages, but was written by Plant in the winter of 2010) continuing to be a key tool, but as far as I know there's no book or even crib sheet for communicating lessons learned. So, while it sounds like your process was pretty similar to ours or Plant's (from whom we got a lot of ideas), I can't tell if that is just coincidence, some kind of general idea osmosis, or specific communications."

I've been participating in the hunt for 11 years now, once as a member of a constructing team (though I was off-site when we met with the previous constructing team, so I didn't get to see the cross-team communications), and I'm fascinated by the back-end parts of the hunt. How has the software evolved? How are different teams running things differently (for example, we did fact checking before test solving)? What humorous things have happened during the construction of the hunt? What unforeseen problems have teams encountered? What information has been shared from team to team? If it doesn't exist, can we make a communal body of knowledge about how to run a hunt (and how not to run one)?

If any of this information already exists in a central place (and is being passed down from year to year), it would be great if we could work on amassing this sort of information, just as we're amassing information on old hunts for the archives. I think this sort of information would be interesting and helpful for anyone interested in writing a large-scale hunt.
1:21 pm
[coendou]
Mystery Hunt process post
I'm not sure how many people are on this comm who haven't written a Hunt before, so most of this is probably old news to most of you. But if you're at all interested in the writing process, I've written up a post about how we managed to turn it around and learn from our 2004 mistakes to write a Hunt that was actually solvable. If any of you read this and think "Well, of COURSE that's what you did, how else would you write a Hunt?" then well, you've just discovered why Time Bandits had problems.

(All opinions contained in this post, of course, are mine and mine alone and may not be shared by other members of Alice Shrugged. I do not claim to speak for the team. Obviously, not all members of Alice Shrugged were members of Kappa Sig! or The French Armada and vice versa, though I speak of them as the same team.)
Tuesday, January 21st, 2014
2:49 pm
[dougo]
The Tea Party meta
First off, I will join in the near-unanimous praise of the 2014 MIT Mystery Hunt.  Very smoothly constructed and administered, and I am grateful for the attention to smaller teams (though my team, Central Services, is neither small nor large, by the current standards).  Lots of fun and fair challenge and plenty of satisfying solving experiences.

But, my one big disappointment with our hunt experience is that we spent a huge amount of time fruitlessly trying to solve the Tea Party meta after having gone down a very inviting garden path, and I can't help but want to blame the puzzle presentation for that.  We did eventually solve it around 5pm on Sunday, but it involved a lot of hand-holding from Hunt HQ to get us off the wrong path.  I am curious to know if any other teams made the same mistake we did, and if anyone has general advice on how to avoid making this kind of mistake and/or how to recover from making this kind of mistake if it's not avoided.

( Spoilers for the Tea Party meta )
Saturday, June 22nd, 2013
2:05 pm
[tacotortoise]
More warm-up puzzles posted
I have finally gotten around to posting my 2013 Mystery Hunt warm-up extravaganza. It is called "A Curtain Call for Borbonicus and Bodley," and is a follow-up to the 2012 Hunt. Visit http://tortoiseshellmusic.com/puzzles/ganzas/curtain-call-borbonicus-and-bodley to download the puzzles. Enjoy!
Monday, January 14th, 2013
8:45 am
[ericberlin]
Spaghetti
A few months ago, I tried a little experiment: I threw some random words together, told my friends to pretend it was a meta-puzzle, and had them try to solve it. And they did! Some of the "solutions" were downright eerie -- you could almost be fooled into thinking they had solved an actual metapuzzle, and not a bunch of words selected with the help of a random number generator. Other solutions went through absurd convolutions before reaching their answers, and these proved to be equally fun.

And that's how the game of "Spaghetti" was invented. Somebody chooses a buncha words, and everybody else tries to "solve" them.
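
Generating a round takes only a few lines of code, if you want to try it yourself. Here's a minimal sketch; the wordlist path and word count are illustrative assumptions, not part of the original game:

    import random

    # Minimal sketch of generating a Spaghetti round: sample a handful of
    # words uniformly at random. The wordlist path and word count are
    # illustrative assumptions, not anything from the original game.
    WORDLIST = "/usr/share/dict/words"  # any one-word-per-line file works

    def spaghetti_round(num_words=6):
        with open(WORDLIST) as f:
            words = [w.strip() for w in f if w.strip().isalpha()]
        return random.sample(words, num_words)

    print(spaghetti_round())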

In honor of the Mystery Hunt starting this week, I thought I'd post a few rounds of Spaghetti on my blog. Come see if you can find a good (or even a bad) answer to a completely fake metapuzzle. Or just read the comments and vote for your favorites.

http://ericberlin.com/?p=5200
Wednesday, January 23rd, 2013
5:10 am
[purplebob]
Puzzles and solutions are up
All Mystery Hunt 2013 puzzles are now publicly posted at coinheist.com, and the solutions are up at coinheist.com/solutions.

We expect to have solving stats and logs, as well as downloadable code to run dynamic puzzles such as Analogy Farm, Battleship, and Text Adventure, posted by Friday.
Monday, February 25th, 2013
10:37 pm
[rhysara]
Upcoming Weekend with WarTron Promotion
Hello World!

Here at the Boston GoToVision office we are getting excited about our upcoming Weekend with WarTron launch event and wanted to remind all of our fans that the promotional period will begin at noon EST this coming Sunday, March 3rd. The promotion will consist of a challenge quiz, and twenty (20) winning teams will be invited to join GoToVision in Boston, Massachusetts on June 29th for a full weekend of fun activities to celebrate the launch of our new WarTron Game.

Check out our corporate website for more information about the challenge and be sure to check the special events page on Sunday for your chance to win!

http://boston.gotovision.net/event.shtml
Friday, February 1st, 2013
6:46 pm
[rhysara]
Just like this year's hunt itself, my writeup may have gotten a little bit long...

Part 1 | Part 2
Saturday, January 26th, 2013
8:16 am
[zandperl]
My review of the Hunt
My review of the Hunt with some very specific criticisms is here.
7:11 am
[snowspinner]
The Nature of "Hard"
King's Quest V famously has a puzzle in which a rat being chased by a cat runs across the screen. If you fail to notice this and throw a boot at the cat to save the rat then the game becomes unwinnable, though there is no indication of this at the time, nor, indeed, until much, much later. This is, by most contemporary standards, a legendarily bad bit of game design.

Meanwhile, elsewhere in the world of video games are things like Super Meat Boy and VVVVVV, which delight in cranking up the difficulty level, but have learned to eliminate most of the things that make that really problematic. The penalty for failure is negligible, checkpoints are plentiful, and dying a hundred times trying to make one jump isn't a big deal.

I feel as though this year's Mystery Hunt really highlighted the degree to which we need an aesthetic like this within puzzling. Because my problem with this year's hunt really comes down to the degree to which "really hard" was treated as an inherently good thing.

To pick a puzzle that hasn't gotten much stick yet: Circuitboard. I looked at it and was pretty sure what was going on fairly fast - I recognized the frames from Mega Man and figured it was a weapon-susceptibility puzzle. But my reaction upon figuring that out was to walk away, because it looked absolutely miserable. It was going to require tracing lines across a massive grid and then working through a logic puzzle where a mistake was likely to blow up in my face an hour or two later, with no clear indication of what the error was or how to undo it. There were tons of places where an error could creep in, and tons of ways that the error could remain invisible until late in the solving.

This isn't good. It's not that the scope of Circuitboard is unreasonable, or that anything it specifically asks for is unfair. It's not. The problem is that the puzzle had a preposterously high error penalty. A bigger problem existed with Portals. A chunk of my team spent fourteen hours on it before crashing out at the end because they knew they'd made some error somewhere in the past. That's just not good. Anything that invites fourteen hours of work to be wasted is deeply, deeply flawed.

Another example is Slithering Slumber. I liked this puzzle as an idea, but there's no reason for the snakes game to lack an undo feature. Since it's impossible to tell what your next target is, the game amounts to a leap of faith: you have to die repeatedly to solve the puzzle. Combine that with having to replay five minutes of the puzzle to get back to the spot where you made your error, and it's just annoying. The lack of undo didn't make the puzzle harder. It just made it longer.
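
And undo is cheap to build. A minimal sketch (this is illustrative, not the actual Slithering Slumber code, which I haven't seen) is just a stack of state snapshots:

    import copy

    # Minimal sketch of an undo stack: snapshot the whole game state before
    # each move, and pop a snapshot to undo. Illustrative only; not the
    # actual Slithering Slumber implementation.
    class UndoableGame:
        def __init__(self, initial_state):
            self.state = initial_state
            self.history = []

        def move(self, apply_move):
            self.history.append(copy.deepcopy(self.state))  # save first
            apply_move(self.state)

        def undo(self):
            if self.history:
                self.state = self.history.pop()

    # Toy usage: extend a snake, then take the move back.
    game = UndoableGame({"snake": [(0, 0)]})
    game.move(lambda s: s["snake"].append((0, 1)))
    game.undo()
    assert game.state == {"snake": [(0, 0)]}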

This is a tricky thing to fix, and it requires a lot of thought, simply because the mechanisms to fix it haven't been invented yet. If you're making a platformer and cranking up the difficulty level, you have mechanisms like fast respawning and infinite lives that people have already come up with. A non-interactive puzzle doesn't straightforwardly have those.

Way back when, Dan Katz posted some guidelines on how to write Konundrums. One of his big ones was that checksums are good. This is something we should really be taking to heart in puzzle design in general. Solvers shouldn't spend too long trying to figure out if they're doing it right. Getting the a-ha or making progress on the legwork should feel like you're doing it right. Even little things can help here: if you've got a cluephrase, try to phrase it so that most chunks of it look like plausible language. Avoid chunks of letters that look like alphabet soup, so that people can tell if their extraction is working (and, more importantly, tell when it's not).
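
You can even automate a crude version of that check. A sketch; the vowel-ratio bounds here are rough guesses of mine, not any standard:

    # Crude "does this extraction look like language?" smoke test: English
    # text has a vowel ratio of very roughly 0.25 to 0.6, while consonant
    # soup falls outside that band. The bounds are rough assumptions.
    def looks_like_language(s, lo=0.25, hi=0.60):
        letters = [c for c in s.lower() if c.isalpha()]
        if not letters:
            return False
        ratio = sum(c in "aeiouy" for c in letters) / len(letters)
        return lo <= ratio <= hi

    print(looks_like_language("PROB HALF SOLVED"))  # True: plausible phrase
    print(looks_like_language("XQZJKVPWGDFB"))      # False: alphabet soup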

Similarly, avoid bits where a lot of the "challenge" is copying something correctly. So much of what bugged me in Circuitboard is that you have to recopy the grid to work with it, but it's terribly easy to inadvertently miss a path or get twisted and put a path between the wrong boxes. Nothing about Circuitboard would have been harmed by an attached sheet that numbered the boxes and told you what boxes had arrows to what other boxes. It wouldn't have made the puzzle any easier, it just would have made it harder to have a clerical error waste two hours of your life.
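
Even a plain-text transcription would do it: once the arrows are written down as an adjacency list, the whole team can share and sanity-check one copy instead of each re-tracing the grid. A sketch, with box numbers invented for illustration rather than taken from the real puzzle:

    # Hypothetical transcription of a Circuitboard-style grid: each numbered
    # box maps to the boxes its arrows point at. Box numbers are invented.
    arrows = {
        1: [2, 4],
        2: [3],
        3: [],
        4: [2, 3],
    }

    # Sanity check: every arrow target must be a box that exists. This
    # catches "path between the wrong boxes" transcription errors at once.
    for box, targets in arrows.items():
        for target in targets:
            assert target in arrows, f"box {box} points at unknown box {target}"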

Hard puzzles are good. I like hard puzzles. But the challenge should be figuring out what to do, not spending long periods of time wondering if you're doing it right. The act of solving a puzzle should yield feedback from the puzzle on whether progress is being made. It's fine to be stumped. It's not fine to think you're making progress when you're really just wasting your time. And this is something, I think, that we need to learn.

Also, just to dig up a hobby horse of mine from a few years ago: transparent unlocking mechanisms are a good thing. Not just how long until the next unlock, but a clear sense of how solving and unlocking are related and what a solve means for unlocking. Please, please, [Atlas Shrugged], bring this back.
Friday, January 25th, 2013
11:28 pm
[novalis]

As usual, Mystery Hunt was one of the highlights of the year for me. I really enjoyed solving, and I'm excited to go back and look at many of the puzzles that I missed. I especially enjoyed Halting Problem, Take-Home Final, and Vertexillology. I enjoyed working on the Danny Ocean meta as well, even though I did not contribute in any way to our solve of it. And there were several other interesting puzzles as well; I just read the answer to Git Hub and it was *brilliant*. Halting Problem, though, is the one I'm going to be telling people about for the next year.

I'm also not sure I can give a fair evaluation of this year's hunt. Our team was not at 100% because one of our teammates passed away a week before the hunt and because we were testing out some experimental solving technology. I know I didn't use my time as effectively as I should have. I also missed the obstacle puzzles purely because I didn't think they would be fun. But everyone else thought they were fun, so I should have gone!

After Codex won the 2011 hunt, Plant sat down with us and passed on some wisdom, which we shared with Sages. Because this hunt ran long, I worry that Sages might not have had a chance to pass this on to [Atlas Shrugged]. So I thought I would share here what we learned. I'm going to discuss it in the context of the length of the hunt, because that seems to be what everyone is talking about.

First, I should say that, unlike many (most?) of the people on Codex, I would prefer a long hunt. Had I known to expect the hunt to run until Monday, I would have gotten a third night at my hotel and been perfectly happy. However, I also really wish I had seen endgame, which argues strongly for a hunt where the winning team solves many hours before HQ closes. So I guess scheduling the winning team to win Sunday morning, with HQ closing at, say, 9pm Sunday and wrap-up on Monday, would be the best of all possible worlds.

Also, announcing this several months in advance would be nice for those who have to buy plane tickets and reserve hotel rooms.

OK, but why did the hunt run so much longer than intended even with free answers and hinting? Based on my own experience, and comments from others, I identified five reasons, along with their solutions:

1. Number of puzzles

This, alone, is insufficient to explain the length of the hunt; if the puzzles had been equivalent to 2011 or 2012's, the hunt would have been over Sunday evening.

However, it was a contributing factor for a reason beyond the obvious one: more puzzles accepted means fewer suboptimal puzzle ideas rejected. I don't know exactly how to compare cmouse's report of 400 puzzles submitted to Codex's 350 entries in our database (ours included events and a few other miscellaneous things, including endgame). I can say that the (much smaller) MoMath hunt had a 3:1 ratio of rejected to accepted puzzles.

2. Slogs

The 263-clip song ID puzzle has been mentioned a lot. I didn't look at it, so I don't know if I would have enjoyed it. But it wasn't the only puzzle that involved a very large amount of research. I believe that puzzles of this sort contributed to the length of the hunt. Another example of this was Permuted, which was practically a mini-hunt on its own.

2012 had its own slog puzzles; one of mine would have fit in that category but my editors made me fix it. (Every hunt will have a few, and there are people who like that sort of thing; the goal is to not have a lot).

I understand that Sages were short on editors. Codex solved this problem by promoting people with no experience to the level of editor. I believe that it did help Codex; I was one of those totally inexperienced editors, but I learned on the job.

3. Weak final extractions and other cluing.

This brings me to the next issue that more editors (or more aggressive editors) might have helped with. Some of the clue phrases were weak ("PROB HALF SOLVED"; "[latin for losers of superbowl forty] vi", where the "vi" was spurious), and so were some other extractions (I've heard complaints about Uncharted Territory and Mergers, although I didn't work on them so I can't say).

Both slogs and weak final extractions would also have been reduced by more testing. My understanding is that a single successful test (even if somewhat rough) was considered sufficient by the Sages. Codex's policy, as suggested by Plant, was that two clean test-solves were required. Why two? Because you never know when you're going to get a tester who happens by pure luck to be on the same wavelength as the author. And test-solving can be easier than hunt solving, because it is typically not as time-limited. Plant reported that in both 2006 and 2011, they allowed in one puzzle with no clean solves -- and that puzzle was the only one never solved during the hunt. In 2012, Codex always had at least one totally clean solve (and two for 90% of puzzles), and during the hunt, every one of our puzzles was solved forward by at least a few teams.

Our testers also pushed back and complained about puzzles that they found boring or too hard. Total speculation: this may be a cultural thing; the culture of Sages is unusual because of its origins at Mathcamp. When Codex was watching Sages on the runaround last year, we couldn't stop cracking up every time one of the Sages' leaders would hold up a hand and suddenly all the rest of the Sages would sit down and shut up. It was funny because Codex has no leaders, and would never shut up no matter who was asking (or how good the reason).

As for weak final extractions, 11 Secret Herbs and Spices was probably the worst offender (and is a good lead-in to the next category). Using thyme twice in one blend was extremely tricky, and combined with the erroneously missing blank it rendered the puzzle unsolvable. It would have been a lot of fun had it worked!

4. Errata

Sages released roughly a dozen errata during the hunt. Codex had two (one serious, one spelling error). I think Plant had two (plus an early-release glitch).

I don't know what the Sages fact-checking process was; Codex's was to fact-check twice before testing, and then to do a final fact-check after the puzzles were converted to final HTML.

Fact-checking was facilitated by requiring clean solution documents.

In addition to the obvious reason, errata slow down hunts for a non-obvious reason: they reduce solvers' trust in constructors, causing solvers to continue to pursue theories that have been disproved by only one piece of evidence, just in case the data is bad.

5. Number of ahas

Many otherwise lovely puzzles this year had one or two too many steps. 50-50 is the example that everyone is pointing to here (it also suffers from weak extraction); Sam's Your Uncle is another. More aggressive editing and testing would have caught these issues.

Despite all of these issues, I still believe that 2013 was a successful hunt with many lovely puzzles. I'm going to go back and try some of the ones that I missed. Thanks to the Sages for all their hard work!
