Platform evolution in the mobile age
The notion of a platform has evolved dramatically since networked computing became popular in the 1990s. What was once a set of low-level services for discrete applications and experiences is now a high-level service and management infrastructure for applications and their interactions. Where the platform and the applications running within it were once a monolithic entity, the platform is now a service provided to the user, one that promises consistency and stability. Where applications once interacted with each other freely (sometimes with unexpected outcomes), modern platforms tightly control how applications interact with the system, the user, and each other. iOS 1.0 in 2007 marked the beginning of this new era, though there are certainly earlier platforms with similar characteristics. For better or worse, this model has been emulated across the industry to varying degrees. As with any generational change, each model has definite pros and cons.
In many ways, this change was inevitable, driven by the commoditization of software: the transition from “application” to “app,” from an expensive, powerful piece of software to its bite-sized counterpart, usually leveraging web services to do the heavy lifting or to interact with other devices and software. Where a desktop computer in the 1990s might have held only a handful of expensive software suites, our mobile devices today carry hundreds of apps, many free or very low cost. Software that was once purchased in a box at a brick-and-mortar store and installed over minutes or hours is now purchased (or not) over the internet, downloaded wirelessly, and installed in seconds. The cost of app acquisition is low and the selection is vast. Naturally, this leads to devices running many different applications, with varying purposes, all interacting with the system, each other, and the user.
The trust model has also changed with the globalization and decentralization of the industry. Anything purchased in a brick-and-mortar store once had a big company name behind it, with an implicit assurance of accountability. Today, an independent developer anywhere in the world can build an app and submit it to a store, where users can download it instantly. There is still a chain of accountability, but it isn’t nearly as strong as under the old model.
The cost to develop software used to be astronomical, requiring thousands of dollars of hardware, books, possibly formal training, a high level of technical proficiency, and an in-depth understanding of the operating system. Modern programming languages and frameworks have removed much of the complexity of working with the system (even as the capabilities remain roughly as powerful), and modern hardware is inexpensive. With a search engine and the open web, training is free or unnecessary. The barrier to entry has been reduced along a number of dimensions; the advent of centrally managed online stores (usually, but not always, run by the platform vendor) has dramatically cut the cost to design, implement, and distribute software. There are also more developers than ever, producing more applications of varying quality and completeness. As a result, many near-duplicate options often exist, and it isn’t immediately clear which one is best, or the best fit for a given user. Users therefore install many different options, removing (or not removing) all but the one deemed best. Platforms now more than ever must be robust to frequent app installation and uninstallation, leaving no stray files or registrations behind.
But with these new platform paradigms and restrictions, how can developers continue to innovate in novel ways? Sometimes the new paradigms help in building new experiences and let users complete tasks faster than ever. But sometimes the killer app can’t be written, because doing so would require the developer to violate the inherent laws programmed in by the platform vendor. Some examples of these modern laws (enforced through what is commonly referred to as sandboxing):
- The app shall not interact with other apps on the system except through explicitly defined contracts (like protocol associations, so an http:// URL provided by one application can invoke Safari)
- The app shall not execute code when the app is not in the foreground.
- The app shall not access personal data without consent from the user.
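The first of these laws can be illustrated with a toy model (a minimal sketch in Python, purely for illustration — `SchemeRegistry`, `register`, and `open_url` are hypothetical names, not any real platform API): apps never call each other directly; instead, the platform looks up the app that declared a given protocol association and forwards it only the URL.

```python
# Toy model of platform-mediated app interaction via protocol associations.
# All names here are hypothetical illustrations, not a real platform API.

class SchemeRegistry:
    """The platform's only sanctioned channel between apps."""

    def __init__(self):
        self._handlers = {}  # scheme -> (app name, handler callback)

    def register(self, scheme, app_name, handler):
        # An app declares, ahead of time, which schemes it handles.
        self._handlers[scheme] = (app_name, handler)

    def open_url(self, url):
        # The platform, not the calling app, resolves and invokes the handler.
        scheme = url.split("://", 1)[0]
        if scheme not in self._handlers:
            raise PermissionError(f"no app registered for scheme {scheme!r}")
        app_name, handler = self._handlers[scheme]
        return f"{app_name}: {handler(url)}"


registry = SchemeRegistry()
registry.register("http", "Safari", lambda url: f"loading {url}")

# Another app can only hand the platform a URL; it never touches
# Safari's code or data, and unregistered schemes are refused outright.
print(registry.open_url("http://example.com"))
```

The point of the sketch is the indirection: the calling app supplies data (a URL) and the platform alone decides which app, if any, receives it.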
iOS 1.0 quickly gave rise to jailbreaking: modifying the software on the device to allow the user to violate the laws defined by the platform vendor. Geeks rejoiced that a powerful platform could again be used freely, without respect for the vendor’s inherent laws. But to jailbreak is also to remove part of the value proposition that led the vendor to implement those laws in the first place. Without them, apps can drain the battery or access data on the phone which the user hasn’t explicitly allowed (contacts, calendar, text messages, and other personal information). Is it necessary that one (be it a user or a platform vendor) must choose between novel innovation on top of a flexible platform and novel innovation baked only into the platform itself?
With these modern platforms comes the realization that innovation is centralized at the platform vendor. Both updates to the platform itself and its laws are delivered unilaterally by the vendor, with no direct feedback from the individual developers who build on top of it. The vendor thus becomes the single entity that decides whether a feature or capability is “worthwhile,” effectively in a vacuum where ideas are deemed worth addressing or not. Major platform vendors certainly employ some of the most talented engineers and thinkers of our time, but the harsh reality is that no group of people has the perspective to understand the full breadth of possible innovation. Decisions are made that, intentionally or not, prevent certain kinds of experiences from being created. There are always reasons for these decisions, be they technical or otherwise, but they fundamentally restrict the ability of developers to innovate on top of the platform.
Ultimately it is unclear what impact these new platforms will have on general-purpose computing. While truly “general purpose” machines are falling by the wayside in many consumer scenarios, the tasks users complete are still diverse and will likely remain so. Successful closed systems, such as game consoles, have existed and will continue to exist, but these systems are often designed for unitasking, where the inability to innovate across tasks matters less to the user. On a gaming console, the game itself has full authority to draw to the entire screen and consume hardware resources as needed, so innovation in most of the ways that matter is still possible. True, there could still be room for innovation in games running resident services, or perhaps in changes to the dashboard/launcher and beyond. Fundamentally, if given access to resources, creative developers will leverage those capabilities in interesting ways.
I don’t side with those who simply refuse to move forward, but I do believe this restricted future represents a centralization of authority that could lead to stagnation on popular platforms. Over the past decade we’ve seen the decentralization of software across the industry benefit users and developers alike, yet the pendulum now seems to be swinging back toward tightly controlled platforms. It’s important that we find balance and enable innovation both within and on top of platforms.