Assorted things about the new MacBook Pros

Hello! Nice to see you. Do you want a snack? I have… oh just Ryvita. Sorry. Hey, I've got some notes on the new MacBook Pro, and especially its Touch Bar.

You can customise the Control Strip (the small section of system controls on the right-hand side of the Touch Bar), as shown during the keynote. But there are two levels to the customisation: you can select what appears in the collapsed view, as well as what appears in the full expanded Control Strip view. When editing, the icons wobble, just like on iOS or in Launchpad. When you add something, you can change its position by dragging it on the Touch Bar in this view.

You can also customise how it appears in other ways. You can set it to be the extended Control Strip by default, ignoring app-specific inputs. You can set it to be all app-specific inputs, ignoring the Control Strip. You can make it so that pressing the Fn key brings up the expanded Control Strip instead of the function keys, if you never use the latter.

Oh, and the Escape key was pretty much always present in the apps we saw used with it…

The tools you see are not just based on what app you're using, but what you're doing in the app. It's really contextual. We saw this in the keynote, but I had to see it to really catch its potential. In Keynote, it showed options for styling text when text was selected, even though the text-styling pane wasn't selected in the sidebar. The Touch Bar simply puts more options at your fingertips, with fewer clicks required. Yes, some of them were options experienced users would simply know how to trigger with a keyboard shortcut (rich text BIU was there, for example), but things like making colour adjustments take a bunch of clicks even once you've brought up the option. Here, you get to keep the screen clear of extra control panes for a smaller task, and keep the focus on the options that are most important to the bulk of your work.

The detail some tools go into and the level of context the Touch Bar can operate in mean this really doesn't feel far from a touchscreen Mac. At the keynote, you didn't get the sense of how connected to the app the Touch Bar actually feels. But there's an argument it can be even better than a touchscreen Mac at times, since you don't need to obscure the screen to use it. You can have an image fullscreen in Photos, and make adjustments with fine-grained controls in the Touch Bar, with the photo in full, detailed view.

Apple talked with me about how the ability to work with one hand on the mouse/trackpad and adjust tools with the other isn't common, even though it ought to be an optimal way to do things. It's interesting to note Microsoft's announcement of the Surface Dial so soon before Apple revealed the Touch Bar – both companies were looking at this same problem. The Surface Dial lets you place a physical dial on the Surface Studio's screen, which triggers a tool that can be adjusted in increments with the dial. The same is possible with the Touch Bar, with a slider rather than a dial. So you can be drawing in a painting app, adjusting the brush size or hardness as you create one flowing line.

Apple's approach doesn't obscure the content, whereas Microsoft's does. Microsoft offers a much bigger screen on the Surface Studio to avoid obscuring that content, of course, while Apple's approach is in a notebook right now.

I'm a tentative convert to the Touch Bar right now. I haven't used it outside of a demo yet (a review sample will come soon), but seeing its use in the real world really impressed me. I do wonder if there will be problems of "predictability" about what I'll find there – if I'm not sure exactly what tools I'll see because I'm not sure what I clicked on last, hunting for the right icon there may be no quicker than clicking through some menus. This is the problem Force Touch on the Mac has – you don't know what something will do until you try it, and it may not be what you're expecting. Only practical use will tell if this is a real problem. It may even be a problem in some apps and not others – like any interface element, it can be used badly.

The Touch Bar screen itself looked a little duller in person than I was expecting. Not unclear by any means, but there seemed to be more glass between the screen and surface than I expected, and it didn't seem as vibrant as the Watch display. However, the Touch Bar has its own ambient light sensor, and adjusts to lighting conditions automatically, with no manual adjustment possible, so it may have just been a quirk of the demo room – again, I would have to test in real life.

It's powered by the T1 chip, which includes the secure enclave for Touch ID, just like other Touch ID devices. Touch ID can be used for payments, unlocking locked notes and so on, just like on iOS. If you haven't logged into your Mac in 48 hours, or if your Mac has been powered off, you'll need to enter your password, again just like iOS.

The T1 doesn't have any additional security operations, according to Apple. It confirmed that it drives the camera, but said it didn't offer any extra protection, as I saw suggested elsewhere.

A couple of technical points:

16GB of RAM. I've seen some of the explanations online, but they don't add up to a whole answer that I'm satisfied with. Apple said that it's a limitation of LPDDR3 that it's only possible to include 16GB. I asked what the limitation was, but Apple wasn't able to tell me immediately – I'm hoping to follow up with that information. Apple didn't use DDR4 (though that could have been used with this Intel chipset to reach 32GB of RAM) because of its higher power draw. I asked if Apple knew exactly what impact on battery life that would have, but that data wasn't to hand. I'll follow up there as well.

Apple also said it thinks the ability to page things to the SSD so fast – 3GB/s in these models – means that pushing some open app data to main storage isn't the bottleneck it once was. But it also acknowledged that this isn't ideal for edge-case users. Apple pointed to its desktop offerings for higher-end specs.

Some USB-C charging cables are rated for a lower wattage than the 87W adapter of the 15-inch Pro – specifically, Al Stonebridge pointed out that Griffin's BreakSafe cable, which adds MagSafe-like charging back in (more on that in a moment), is rated for 60W. The 15-inch model will slow-charge through that cable, and if you connect several peripherals and start doing high-end work, you might see a net power loss even while connected to power with that particular cable.
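As a back-of-the-envelope sketch of why that net loss can happen (the load figures below are illustrative assumptions, not Apple's published numbers – only the 60W cable rating comes from the paragraph above):

```python
# Rough power-budget sketch: charging through a cable tops out at the
# cable's rated limit, so any system load beyond that limit has to be
# made up from the battery, even while plugged in.

def net_battery_drain_w(system_load_w, cable_limit_w):
    """Watts drained from the battery despite being on mains power."""
    return max(0, system_load_w - cable_limit_w)

# Light use: well under the cable's 60W ceiling, so the battery charges.
light = net_battery_drain_w(system_load_w=35, cable_limit_w=60)

# CPU/GPU-heavy work plus bus-powered peripherals pushing past 60W:
# the battery drains slowly even though the Mac is "charging".
heavy = net_battery_drain_w(system_load_w=75, cable_limit_w=60)

print(light, heavy)
```

The same arithmetic is why the 87W adapter itself never causes this: the machine's peak draw stays within what the stock cable can deliver.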

I asked about the downside of losing MagSafe in general. Partly Apple said it's so excited about the opportunities of USB-C/Thunderbolt 3 that it was just one of those things that had to go in the process. But partly, the idea is the same here as it was for the 12-inch MacBook: that its battery should last for a full work day, so you won't tend to have it plugged in in a position where you're likely to trip over it. The intention is that, much like a phone, you charge it overnight when it's not in use, and then roam free of cables when you are using it.

A slightly random aside: yes, it's possible to plug four charging cables into the MacBook Pro at once. No, it won't draw power from all of them. Whichever one you plug in first will be prioritised as the power source.

---

Of course, I asked about the ports. Specifically, I asked whether Apple considered ramping up the change more smoothly with maybe two Thunderbolt 3 ports and a few legacy ports, but it pointed out that that's not traditionally Apple's way. It doesn't do things by halves.

Apple wanted these MacBook Pros to be thinner and lighter. They're considerably less voluminous too. To get there, it decided that the larger and more cumbersome ports had to go. Apple appeared to accept the criticism on things like not being able to connect your iPhone to your MacBook Pro out of the box. It didn't brush it off as immaterial, but the suggestion seems to be that adapters will get you through until you can catch up with upgrading your accessories. (A specific point was made of noting that the Thunderbolt 3 to 2 adapter was useful for owners of current MacBook Pros, since they could use it with their existing notebook to buy and use Thunderbolt 3 or USB-C accessories now, while future-proofing for getting one of these MacBooks down the line.) Yes, there was an expectation that spending the extra money was no problem – though while Apple's adapters aren't cheap, it also talked about getting much cheaper cables from Amazon.

I’ve seen the argument that unless Apple “rips off the Band-Aid”, people won’t ever really make the transition to USB-C products. Maybe that’s fair, but a count to three before it pulled could have helped people steel themselves.

I mentioned on Twitter recently that one of the sharpest things I saw about the ports change on the new MacBook was from Phil Ewing.

“The issue is Apple has placed all the burden on users, rather than assuming any of it with the new hardware.”

It made me think of Steve Jobs' famous talk about what computers can be – the ideal of them.

“I think one of the things that really separates us from the high primates is that we’re tool builders. I read a study that measured the efficiency of locomotion for various species on the planet. The condor used the least energy to move a kilometer. And, humans came in with a rather unimpressive showing, about a third of the way down the list. It was not too proud a showing for the crown of creation. So, that didn’t look so good. But, then somebody at Scientific American had the insight to test the efficiency of locomotion for a man on a bicycle. And, a man on a bicycle, a human on a bicycle, blew the condor away, completely off the top of the charts.
And that’s what a computer is to me. What a computer is to me is it’s the most remarkable tool that we’ve ever come up with, and it’s the equivalent of a bicycle for our minds.”
Steve Jobs

Any tool, including phones and computers, should assume people’s burdens. What else do they exist for? If you're an existing Apple notebook user, then Phil Ewing is right, this minimum $1499 notebook is putting a burden back on you. Now you must consider having the right adapters with you for whatever situations may come up in your work. This isn't that big a deal on the face of it. You can solve an awful lot of problems with these two adapters from Satechi:
3-in-1 Combo Hub
Weighs barely anything. No problem to just store it in the bag you carry your MacBook Pro around in.
USB Type-C to Type-A Adapter
This has a loop so you can attach it to your keys, so you'll almost never be without a USB-A adapter (and if you need to connect something more obscure, there's a better chance you can find the right adapter wherever you are – after all, the older MacBook Pro otherwise only had SD card and Thunderbolt ports).

But whether you can notionally solve the problem isn't the point. You're now at risk. There is now a higher chance that you'll be caught in a situation where you might NEED to connect something to your MacBook that you can't, because you forgot the adapter. This burden, and its mitigation, is now on you.

The counter-argument here is that the progress of its new design may relieve more burdens than it adds. Does thinner and lighter do it for you? Is the Touch Bar enough of an addition for you? What about the brighter, wider colour gamut screen? I think you should try one in the store before you make a decision on this if you’re considering the upgrade, to give the Touch Bar especially a real test, but for sure the balance of this equation will not fall in the MacBook Pro’s favour for everyone. I'm looking forward to the chance to assess it properly, but for me personally so far, and the work I do, the balance of changes seems to come down in the MacBook Pro's favour.

Fitting it all in: Apple's September 2015 keynote

Today's Apple event gave me thoughts and feelings, which is the unfortunate side effect of spending so much time following the industry, so I'm going to write some of them here (since the tweetstorm I occasionally expel after such things is not exactly ideal for nuance.)

Watch
The new colours of Sports models and the new bands look like a great mix. I feel like, for all the personalisation options the Watch currently offers, it's still just a bit too samey – even the middle-class people I know can't justify going beyond a certain few models, leading to a level of aesthetic homogenisation that just isn't becoming for something that Apple itself acknowledges is immensely personal. These new colours are great options, and combined with third-party bands (we're going to have a massive round-up of third-party bands in Mac|Life before Christmas), we're getting better options for customisation without breaking the bank.

iPad Pro
It's big. Biiiig. Big in size, and potentially, big in impact. The performance levels they're talking about for the A9X chip would suggest MacBook-matching levels of capability (depending on any number of factors that aren't explained in the simple terms Apple uses during the presentation).

It matches basically what I expected: a 13-inch screen, most powerful processor in the line, 10-hour battery life, pressure-sensitive stylus, though not, interestingly, a pressure-sensitive screen. (Actually, I had predicted a 12-inch 3:2 screen in the latest MacLife, but this was always the more likely option.)

I suspect early views of it will struggle slightly with its size and weight – it weighs the same as the first iPad, but over the larger area, that weight will be more cumbersome. It's still lighter than the MacBook, mind, for a larger screen – but without the built-in standing abilities of a laptop. The keyboard cover (hello, Surface! Funny to see Microsoft effectively abandoning its own tablet in that keynote for this one, but I think it's the right decision for it) will increase its weight, and costs a frankly alarming $169 – maybe not the automatic addition it perhaps should be.

I think the larger screen is a great opportunity, for both Apple and developers – making use of that space (and making an app that works really well in Split View) presents yet another chance for apps to differentiate themselves. And that goes double when you add in the Pencil (though that's always going to be a relatively niche add-on).

Lacking 3D Touch (I'm so pleased they seem to have renamed it from Force Touch) is quite a frustrating omission – it leaves the rest of the iPad line sitting in a holding pattern this year. The iPad mini 4 is, frankly, what the iPad mini 3 should have been last year. It's possible Apple is only happy with the UI for 3D Touch on the iPhone right now, which is fair enough, but it still makes for a fallow year for those of us who love the 9.7-inch iPad.

Apple TV
(Hello, yes, I'm going to use the term 'casual gamers' several times here. It's a meaningless term! It is. But it serves a purpose here, for the many people who play games (often free-to-play) from the App Store who don't tend to invest heavily in consoles/PCs and $60 games.)

This is pretty much exactly what I want personally, and I'm hugely looking forward to grabbing one. Siri with deep searching into content from various apps is ideal, and the controller seems like a good design. I'm excited to see where apps on the TV will go – I can't say I'm fired up much by Gilt, but the thing about the App Store has always been that the best stuff comes out of left field.

Anyone who knows me knows that most of my opinions centre around the games side of things, of course. I thought the Harmonix game had all the polish you'd expect, but it all looked a bit… 2007. We know that Apple TV will support game controllers as well as the motion controls of the default controller, and Transistor (which was featured) is a very fine console port, but attracting anything beyond it is going to be difficult.

Apple was never going to aim at the high end, and it makes sense for Apple to take its dominance in casual gaming and make the leap to TV. If you're making apps for this new platform, why not make games, since people like playing games on your other devices?

I just don't think it's going to make any significant impact. I think the casual players it's theoretically best for (and whom Apple seems to be aiming at, judging by its featuring of Crossy Road and Beat Sports) will largely bounce off it. The thing is, casual gaming on iOS takes place at a small, personal scale. You fire it up on your phone while waiting on the bus, or bored at work, or watching trash TV. But gaming on your TV requires dedication. Specifically, it requires dedicating the centrepiece of your living room to this pursuit – diving into that gaming experience to the exclusion of everything else you could be doing on the TV at that time, or anything someone else sharing the room might want from it.

Is this how most of the casual gamers playing App Store games feel about games? I doubt it. If they buy Apple TVs, I'd expect the games to stay on the small screen, and Netflix to stay on the big screen.

iPhone 6s
3D Touch is a really interesting implementation of the technology. It's been used so much more simply elsewhere – just bringing up hidden options. Here we see a smart focus on using it to reveal information as well as offering shortcuts – a way of letting you do things with the fewest possible taps. I think this is a theme in iOS 9, from its proactive elements, to its better use of the Share button for creating reminders and notes, to the 'Back to' button in the top left after you follow a link between apps. Again, it'll be fascinating to see devs take this interaction method and run with it. 

The new camera and 4K recording both looked… well, sort of the same as most other video and images probably would on the stream on my MacBook Pro, but still great. Some poked fun at Apple for delving into the technical details of the camera's pixel arrangements, but I really appreciated it. With Apple moving to a 12MP sensor, I had questions about maintaining image quality, and this answered them. Did it mean anything to most people? No, but most people weren't on the stream. When they read about the new phone, they'll get the headlines – but those of us who cared got the detail we needed.

The camera is the key upgrade in this version for many people, make no mistake. The phone has, really, two new features – and while 3D Touch looks great, relatively few people care about something they've never used on a phone before. A big step up in the camera's numbers is something that appeals to EVERYONE, though.

4K video is an interesting one. I'm a little bit amazed that they're introducing this into a phone with a 16GB model. 16GB is already extremely limiting – Apple's implementation of cloud services simply doesn't ease the burden on storage enough, despite the company's claims (I think it was Phil Schiller during his talk with John Gruber). Adding larger still images and four times the video resolution is going to make that a lot worse (especially when you add in Live Photos, despite what Apple says about them not taking up that much more space per image).

It is possible that Apple is using H.265 encoding for its 4K video, which would make a big difference here – keeping the video file sizes comparable to 1080p videos encoded with the current H.264 encoding. It might be that the A9 chips have hardware acceleration for this, which is perhaps why the iPad Pro and iPhone 6s were both mentioned as being capable of 4K video editing in iMovie, but there was no mention of support for it in iMovie for any other device – including Macs. That's just speculation on my part at this point, but I hope it's accurate, because it mitigates at least one element of what could otherwise be a farce for anyone who buys the 16GB iPhone.
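To put rough numbers on that speculation: H.265 is commonly cited as needing around half the bitrate of H.264 for comparable quality. The bitrates below are illustrative assumptions, not Apple's figures, but they show the shape of the arithmetic:

```python
# Back-of-the-envelope file sizes for one minute of video.
# Assumptions (not Apple's numbers): ~17 Mbps for 1080p H.264; 4K has
# 4x the pixels, and H.265 needs roughly half the bitrate of H.264
# for comparable quality, so 4K H.265 lands at ~2x the 1080p figure.

def size_mb_per_minute(bitrate_mbps):
    """Approximate megabytes per minute at a given bitrate (Mbps)."""
    return bitrate_mbps * 60 / 8  # Mbps * seconds / bits per byte

h264_1080p = size_mb_per_minute(17)      # baseline
h265_4k    = size_mb_per_minute(17 * 2)  # 4x pixels, ~half the bits/pixel
h264_4k    = size_mb_per_minute(17 * 4)  # 4x pixels, no H.265 saving

print(h264_1080p, h265_4k, h264_4k)
```

Under these assumed bitrates, H.265 doesn't make 4K quite as small as 1080p H.264, but it halves the hit versus recording 4K in H.264 – a meaningful difference on a 16GB phone.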

Overall
I thought this was a really strong event – though it helps when you have your three biggest products of the year to announce. No fluff (outside of the usual Apple hyperbole), no lengthy explanations of middling music services (ahem), no messing around.

I think all the products look potentially great, with the proviso for the Apple TV and iPad Pro that they'll need great support from developers to reach their peak – but I doubt anyone believes that won't happen. The iPhone 6s is pretty much the quintessential 'S' iterative upgrade, but that's no bad thing. They've nailed speed and the camera, and I really do believe those alone are enough to make it yet another resounding success.

In a year, we've had the Watch, the Apple TV and the iPad Pro. If anything, the only problem now is that too much of our rumour fun has been taken away. Sure, we've got the car, but that's probably at least five years out. 

Oh, hang on, Apple's making Mac-level processors now? Yeah, that'll do.

Square Enix set itself up for a Fall

Today, Square Enix announced a new game in the Deus Ex series. This should be very good news. It has not been received as very good news by lots of people, though. In fact, Square Enix saw fit to move the time of its announcement (and the embargo for when sites were allowed to talk about it) from its original slot to the exact time that all the reviews for The Last of Us went live, almost as if it were trying to bury it.

Perhaps Square was trying to bury it because it began to suspect the announcement would bring in comments like these: 

EduardoFedrizzi at Joystiq
"HAHA 'The Fall' is definitely a fitting name for this game."

Fallout at Eurogamer
"I didn't think it'd be so possible to lose interest in this so fast."

Sieroa also at Eurogamer
"Sigh. Here i was waiting for a sequel for Human Revolution."

TheCrimsonFenix at CVG
"This goes up there with most disappointing bulls**t of the year."

Of course, there were more positive comments too, and yet I predicted these responses would come yesterday (just ask Tap!'s editor, Christian).

So what's gone wrong? The timeline starts with the news that Square registered domains relating to the title Deus Ex: Human Defiance, which drove speculation that this would be the name of Human Revolution's sequel, though Square later confirmed that it would not be (after using the name for an April Fool, in fact).

But then domains for The Fall were spotted, followed by a five-second teaser trailer yesterday. Now, with Human Revolution being a PC, Mac, PS3, Xbox 360 and soon Wii U game, the natural assumption from many was that the forthcoming game would be the sequel they were hoping for. Some sites knew already that this was not the case, but couldn't say anything. Others, even big sites, didn't know, so inadvertently contributed to the eventual feeling of disappointment: Alec Meer at RockPaperShotgun reluctantly posted the teaser, adding that "a new Deus Ex game is big enough news round these parts that it overwhelms my antipathy towards trailer-for-a-trailer nonsense".

Really, then, disappointment was inevitable for those expecting a full Deus Ex sequel. The fact is that the quality of what was actually announced – an iPad game with a whole new story purporting to offer the full Deus Ex experience – became irrelevant. 

Which is a shame, because what Square Enix actually announced is very exciting. I cannot recommend strongly enough that you wait until next week before making any judgements on whether making The Fall as an iOS game is a good idea or not.

The problem is not that The Fall is an iOS game. It's that Square let people think that it wasn't.

Update

Mark Brown of Pocket Gamer says on Twitter: "I think it was a mistake for Square to put an embargo on actual impressions of Deus Ex, so we can't tell you how it feels on a touchscreen. "

Square could have let all the journalists who've already had a chance to try the game at least temper the criticism of 'this kind of game will be rubbish on iPad' with our thoughts on what it's actually like. Instead, we just get to sit here and watch opinion form against it.

Are games reviews buying advice or art critiques? SimCity makes me wonder

SimCity is out – sort of. The servers have been struggling, meaning that many people are unable to play, or even to access their cities, because you need to be able to access the servers to play. Which is all sorts of crap.

Polygon originally gave SimCity a 9.5 out of 10 score, based on their experience with the game in optimum conditions before its release. Then, the game launched for real and it was significantly broken. So Polygon lowered its score to an 8.

When the first reviews of SimCity were going up, I found myself wondering if the high scores would be warranted if it did indeed turn out to be borderline inaccessible. When that came true, and Polygon lowered its score, I then found myself wondering if the original score wasn't still appropriate, presumably because I'll even be contrary with myself if no-one else is around to be irritating at.

At what point do we separate the content from the delivery method? Ben Kuchera at Penny Arcade Report likened it to critics seeing a film at a cinema and giving it high marks, followed by the cinema burning down, so no one else could see it. I don't think that's quite the right analogy, though, because you can simply go to one of any number of other cinemas in most cases, or even wait for it to arrive in another format (DVD, TV, online…).

If there's a real-world analogy to be drawn, I see it more like a painting in a gallery. There's only one place you can go to see it, and a critic might label it a true masterpiece when they see it at, say, an invitation-only event at the gallery. Oh, and tickets to the gallery can be bought in advance at any time, regardless of how many people can fit in the gallery.

Opening day arrives. Everybody who pre-booked a ticket turns up, and the queue dominates the whole street. Only a lucky few at a time can see it. There's no other option, of course. You can give up and go home, or you can wait – but bear in mind that you've already paid for the ticket.

Has that work of art become any worse? Should the critic revisit it in light of the fact that buying a ticket no longer guarantees you'll be able to get in to see it? Should the work stand on its own, but with a warning about the difficulties of seeing it noted separately?

Of course, art isn't given a rating out of 10. That's a key difference. You can praise one aspect while criticising another, without having to work out how that averages into a score. I don't know why these server issues are worth docking exactly 1.5 points. Why not a 7? Polygon's review policy lists a 7 as: "Sevens are good games that may even have some great parts, but they also have some big 'buts'."

More dramatically, why not a 1? From Polygon's policy: "A score of one indicates that Polygon review staff believe said game doesn’t properly function. Most reasonable people will not be able to finish a game with a score of one due to massive technical, design, and execution problems." It seems to fit.

I don't actually mean to pick specifically on Polygon here, even if I sort of am. I happen to think that review scores do have a place as a way for readers to judge comparable games (though when you try to compare across genres, the usefulness dissipates somewhat). I also think that Polygon's policy of updating its review scores as games change over time is bold, and a smart way to deal with the increasing number of online games that will build on themselves, and likely improve.

But it raises a kind of existential problem for the idea of games reviews.

"Maybe it would be better if reviews got rid of the numbered score, at least for a bit, just to say 'not yet'," Kuchera noted, while wondering if SimCity was immune to the idea of a classical review because of all this.

I'm now wondering whether a review of a game is buying advice for customers, or whether it's a critique of form for enthusiasts. The former would surely tear SimCity a new sinkhole, but the latter could reasonably ignore its technical issues.

Of course, there's no 'right' way to review – a publication might do either of those, but I think this example raises the question of whether it's possible to do both consistently, and whether it will be in the future.

--------

PS Perhaps the closest analogy for SimCity's release is House of Cards on Netflix. It got positive early reviews, but if Netflix had gone down and it had been unavailable, would those reviewers have revisited their score? Would we have been saying that they shouldn't have scored it highly until we knew if Netflix could cope with the demand for it?

Sony's mistake: it announced the generic next gen, not the PS4

I've been thinking further about why the Sony announcement fell so flat for many people, despite some interesting moments, and I think it's related to the failure to actually show the PlayStation 4 unit.

What Sony actually showed off was a hardware architecture, a revamped PlayStation Network service, a couple of exclusives and a bunch of games that will be multi-platform.

What Sony did not show off was anything we can really latch onto as being PlayStation.

The architecture is supposedly nearly identical to what's going into the next Xbox, save a few rumoured advantages in Sony's design. So it doesn't really tell us anything unique about the console – and this ties into most of the game announcements. Watch Dogs; Final Fantasy; The Witness; Destiny – none of these will be totally exclusive (The Witness will also be PC/iOS, and FF will probably get an Xbox release).

Announcing all these doesn't herald the future of PlayStation, but the entire next-gen. Hell, some of them are even coming to the current gen, and Watch Dogs was announced for Wii U just after Sony's event.

And the exclusives didn't help. Killzone really couldn't look much more generic – until the appearance of a Helghast at the end, it could've been a Halo game, a futuristic CoD game, anything. InFamous was never unique, having launched alongside Crackdown and Prototype. I'm not sure if DriveClub is actually exclusive, but it again could be almost any racing game, even if I do think its first-person gimmick is really nice.

And the PlayStation Network stuff is a feature, not a product. It's an adjunct to the core experience, not what drives someone to the console in the first place. And its features – social, streaming, recommendation engines – aren't unique either; they're just being brought together really well.

Where was all the stuff that gets you excited for a new PlayStation?

A Gran Turismo reveal would have given Polyphony the chance to talk about the ludicrous physics detail that goes into the game – an infamous level of meticulousness we associate with that brand, and to its ties to the PlayStation. A Naughty Dog appearance could have given them the chance to talk more about the story-telling possibilities they see in the new hardware, with a level of sheen that we associate with the Uncharted series, and subsequently with the PlayStation. And bringing out Ueda and The Last Guardian would've… well, solved everything, really.

Even the promise of an indie-friendly store is hollow without a Papo & Yo or similar to exemplify it, and Media Molecule's charming demos of the ways Move can be used struck me as ideas that could equally apply to Kinect.

It might be a small thing, even insignificant in the grand scheme of the larger announcement, but showing off the actual console would have given us a glimpse at this particular slice of the next gen – the closest we can get to a tangible representation at this point.

Without that focus, Sony failed to make its PlayStation 4 launch about its own console. It was about the hardware we'll see across the industry. It was about the shifts to indies and alternative control schemes that are happening everywhere. It was about cross-platform games. It was not about the PlayStation.

The long wait for the PlayStation 4

Earlier today, I said on Twitter that I was surprised at the number of complaints from tech pundits that Sony didn’t announce a price or more exact specs for the PlayStation 4 at its announcement event. There’s a good eight months or so before it actually launches, so this stuff didn’t need to be announced yet.

Gary Marshall tweeted back to say: “That’s the problem right there. Ridiculous gap between launches and shipping.”

We’re all used to Apple announcing something and sticking it in the shops just a few days later, with many other companies doing the same thing these days (the BlackBerry Z10 was in stores pretty much the same day it was revealed). But for consoles, I don’t think a long lead time is unreasonable, because each one isn’t just a new gadget; it’s a new development platform.

New platforms don’t tend to launch so quickly, because developers simply need time. Windows releases go out to developers long before they're released properly. The iPhone SDK was announced in March 2008, but didn’t launch until July that year. Any new iPhone hardware is built on the same base as it was then, so developers need to put in minimal support, but that won’t be the case for the PS4 – it’s an entirely new architecture compared to the PS3 (or the Xbox 360). Some developers have already had time with the dev kits, we know, but consoles are malleable in these early stages, with the amount of RAM and the hardware clock speeds all likely candidates for tweaking. These tweaks again slow development down.

A console isn’t like a phone or a tablet or a laptop. It isn’t part of an environment where the sands are constantly shifting – new phone releases overlap each other constantly, driving a release cycle that’s reliable but takes relatively small steps. Announcing a tablet eight months early just guarantees that another manufacturer will catch up or overtake you by the time you get to release. Consoles don’t have that disadvantage (Sony knows Microsoft can’t leapfrog it without a commensurate rise in costs), meaning that announcing eight months early is a chance to work on building your hype and excitement.

Crucially, a console is not a multi-purpose device. Pretty much its sole selling point is its games roster, and the release needs to be crafted around building up to that. Announcing now has given Sony three big opportunities to show more games between now and launch: GDC in March, E3 in June and TGS in September. It’s enough time for smaller studios, who may not have had early dev kits, to make progress on a game for the launch window.

It’s easy to think of the show on Wednesday as an Apple-style launch event, but I think it’s more like the first salvo in a marketing campaign. Yesterday, we saw the concept. At E3, I think we’ll see the hardware. At TGS, we’ll probably get a final price and date. I also think Sony wants to push new online services on the PS4 more than it ever has before, which is an education process, and again needs time.

There was also a lot packed into the conference, even if it was mostly a bit dull. Lots of developers, lots of spec talk (despite some claims to the contrary) and lots of concepts. But think about what’s yet to come: Naughty Dog, for example, were totally absent – I think we’ll see a next-gen Uncharted reveal after The Last of Us ships. Insomniac Games might’ve gone multi-platform, but I bet we’re due a next-gen announcement from them. What about Rockstar and GTA V? I’d put money on a beefed-up port. Announcing the console this early gives time to space these announcements out, and maybe even produce playable demos closer to release, to take the excitement to the next level.

In the tech world, eight months is a long time. But without a throng of constantly updating competitors breathing down its neck, and with the confidence that its platform is solid, I think Sony has the freedom to use this time effectively.

Lastly, there's the issue of price. Nintendo has already had issues with a strong Yen causing pricing problems for the Wii U and 3DS, with the exchange rate cutting into the amount of profit it makes from overseas. With the world economy the way it is, I think Sony would be mad to announce global prices this week that it hopes will still hold up at the end of the year.

--------

Leaving that aside, here are some random thoughts on the event:

I’m pretty positive about the specs, including the fact that they’re so close to PC specs, which has earned scorn in some quarters. Development costs are so high now that if things aren’t made easier for developers, publishers will look even more to boring, homogenised franchises instead of exciting new ideas.

I think Sony made a mistake not showing off the hardware. It’s missed out on the chance for hundreds of thousands of news stories to be illustrated with a picture of it. There’s no icon, no object of desire for people to focus on. The design could have been instilled as the vanguard of the next generation, but no.

It feels like Sony is going to push the idea of the PS4 as a gaming ‘hub’, with streaming mobile gaming spreading out from it. I think this can be a strong value proposition, but it's going to be total arse on most home networks. The Wii U controller uses Wi-Fi to stream video, but over its own private, direct wireless-N connection. Without that kind of commitment to the cause, remote play may be relegated to just another feature instead of a key selling point.

The 8GB of GDDR5 RAM is great. Massive bandwidth, loads of space. I don’t know how much might be used by some of the video or background features, but we should be looking at beautiful, near-photorealistic textures.

Why Zelda is best as a cartoon

There's always been a lot of excitement about the idea of a graphically rich, mature Zelda game. Both the Gamecube and Wii U used a Zelda tech demo to show off their capabilities, but what the Gamecube eventually got wasn't a realistic, dark Zelda game. It was a light, fun, cel-shaded adventure on the high seas. And it annoyed a lot of people – all of whom had missed the point.

A lot of Shigeru Miyamoto's games are his way of distilling an experience to its most enjoyable. The story of Zelda's inception is that in his youth he explored the woods near his Kyoto home and stumbled across caves, which he would explore, presumably filled with bats that chased him and spiders that seemed much larger than they were to a child. The idea of the Zelda games is to bring this particular feeling of child-like gumption to anyone and everyone – wrapped around a narrative of saving the world, naturally.

The 3D Zelda games haven't generally stuck true to this spirit. For me personally, even Ocarina of Time isn't really a proper Zelda game. The sense of wonder in exploration is the idea of the completely unknown – to go down into a cave, fight your way through and discover something that no one has seen in hundreds of years, if ever. Ocarina of Time has a sense of exploration, but the sense of discovery is only your own – other people have already got to just about every part of the world; it's only you who's seeing it for the first time. This is also true of Twilight Princess, though that does at least have the Twilight realm to claim as unexplored (except that it's pretty much a mirror of the regular realm, so it doesn't count).

Wind Waker, on the other hand, is the story of an entire lost world. The idea of discovering an unexplored island immediately adds to the sense of wonder and intrepid valour, but what you find on these islands also often hints at what was lost. It's clear that nearly everyone left in the world is oblivious to the kingdom you begin to explore – these discoveries are all your own.

(Skyward Sword takes a wander in the direction of discovery, but the way it restricts you to little hub areas pulled it back away from that for me.)

And in Wind Waker, you do that discovering with a comically emotive cartoon avatar, partnered with his talking dragon/lion ship. It's just the perfect tone for filtering adventure through the lens of childish exuberance. I think the heart of Zelda is not the mature fantasy epic that began with Ocarina and is now desired by a large portion of the fanbase for every game. It shouldn't be Lord of the Rings – it should be The Hobbit.

When the HD remake of Wind Waker was announced for Wii U, the console became a must-buy for me. I have played it over and over in the past, and I will again. I can imagine young Shigeru Miyamoto clambering into caves, creating stories of the dangers and rewards within, and he doesn't look like the Link of Twilight Princess or even young Link in Ocarina. He looks like cartoon Link, with a grin from ear to ear, confiding in his trusty boat companion.