I’ve been using Android mobile phones since Eclair (2.1). While the operating system visibly improved at each iteration, updates were, and still are, a major pain point.
Users are also missing out on performance and battery improvements that come with new Android releases. Most flagship phones released in 2017 shipped with Android 7.1 (a six-month-old release) at a time when 7.1.2 was readily available. Lots of budget phones sold in 2017 came out with Android 6.0, a version more than 18 months old at the time of writing.
Most devices are sold on carrier contracts. Chances are that updates have to go through several gatekeepers. The first one is Google, then hardware vendors, then phone makers and finally the carrier itself. Each step requires validation, testing, then more validation, and then more testing.
There’s simply no economic incentive for anyone in this chain (except for Google) to support a phone after it has been sold. This means that major updates get hugely delayed 1. More often than not, it also means that devices don’t get timely security updates, which is worse. The latest round of security vulnerabilities means that receiving the wrong SMS/MMS, or being near the wrong Wi-Fi or Bluetooth radio, could result in a compromised device. People have neither control over nor defense against this, besides patching security holes.
In this race to the bottom, some manufacturers openly state that it costs too much to push security updates on a monthly basis. They prefer to roll them into major platform updates, but this leaves their users unprotected for long stretches of time.
Quoting an article from Ars Technica:
> Motorola understands that keeping phones up to date with Android security patches is important to our customers. We strive to push security patches as quickly as possible. However, because of the amount of testing and approvals that are necessary to deploy them, it’s difficult to do this on a monthly basis for all our devices. It is often most efficient for us to bundle security updates in a scheduled Maintenance Release (MR) or OS upgrade.
The problem, however, is: how do you sell security to users? How do you get them to vote with their money, the only weapon they have to force vendors to change their attitude? Most people don’t want to spend 800€ on an iPhone 2, which is probably the most secure consumer mobile phone available today. The layperson doesn’t really have a concept of “software updates” and “security”. Even fellow software engineers don’t seem to grasp or fully appreciate the concept! Most people just want a cheap smartphone to use Snapchat and WhatsApp; they probably won’t demand security unless they get hacked en masse and suffer direct economic damage from hacking attempts.
Which means that the burden of keeping users secure lies entirely on Google’s shoulders.
With Project Treble, vendors will only need to customize the hardware support layer, assuming the HAL is well defined and well separated from the rest of the system. This is a nice improvement since, before Treble, hardware customization usually required changes spread across the entire operating system.
However, I’m not confident that this will solve anything at all. With this model, SoC and phone manufacturers are still responsible for shipping updates and, as we said before, there’s just no economic incentive for them to update phones. These companies live on razor-thin margins and any cent saved can be spent surviving in this highly-competitive market.
Even on the high end (i.e. phones that cost more than 600€) things aren’t much rosier. While manufacturers may have an easier time updating the low-level guts of the operating system, let’s not forget that most players in this space still ship heavily skinned Android versions such as TouchWiz, Sense, EMUI, etc. Skins slow down updates as much as adding hardware support does, and Google doesn’t yet have an answer to make that easier.
I strongly believe that the situation won’t improve at all until Google either:
- Starts updating the operating system by itself, bypassing manufacturers and carriers, and letting them only manage hardware integration and the baseband processor.
- Starts exerting more control over partners (especially phone manufacturers), using access to Google Play Services as leverage against vendors who don’t, for example, promise to push monthly security updates for at least three years from a phone’s release date. Those failing to do so should see their access to the Play Store and proprietary Google apps revoked.
I don’t think Google will choose the former, since it goes directly against the very reason that made Android popular in the first place, and would risk Google’s relationship with hardware partners (e.g. Samsung) and carriers.
I am more confident that Google will try the latter. We’ll see how this one pans out.
Which is, arguably, a minor loss if you are not interested in features or efficiency gains. ↩
Since Google killed Nexus phones (which were affordable at around 250-300€ for a base model), the only alternative is a Pixel, which often costs more than an iPhone and is supported for only three years. Apple devices are usually supported for at least five years. By choosing a Pixel you are getting less value than an iPhone. By choosing anything else you are getting even less. ↩
I’ve been dipping my toes in Rust lately and I’m finding it a competent programming language. It’s a nice middle ground between C/C++ and a programming language that requires you to know Category Theory in order to start using it (cough… Haskell cough…).
When you start using Rust you quickly find out that it’s a good idea to use something called Clippy. It’s a fantastic tool that helps you write better and more idiomatic Rust code. I’m sure its name is a reference to this guy on the right, just a little less obnoxious.
Having Clippy on your side is like having an experienced Rust developer telling you that the code you are writing is of questionable quality, even though it compiles just fine.
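For instance, here’s a sketch of code that compiles cleanly but that Clippy would complain about. The function and its name are made up for illustration; the lint names in the comments are real Clippy lints.

```rust
// Compiles without warnings under plain rustc, but Clippy flags two spots.
fn first_even(values: &[i32]) -> Option<i32> {
    // Clippy's `len_zero` lint suggests `values.is_empty()` instead.
    if values.len() == 0 {
        return None;
    }
    // Clippy's `needless_range_loop` lint suggests iterating over
    // `values` directly instead of indexing by hand.
    for i in 0..values.len() {
        if values[i] % 2 == 0 {
            return Some(values[i]);
        }
    }
    None
}

fn main() {
    println!("{:?}", first_even(&[1, 3, 4])); // Some(4)
}
```

Running `cargo clippy` on something like this points at both spots and suggests the idiomatic alternative for each.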
There’s one catch, though: you can’t use it on stable Rust releases. 1
Clippy, in fact, is implemented as a compiler plugin and, as such, it depends on unstable
APIs that are available only on nightly releases of the Rust toolchain.
I don’t like to run nightly or beta releases as my daily driver for a couple of reasons:
- I don’t want to risk depending on features that will only ever be available on nightly Rust, or that will change wildly between snapshots. 2
- Being a beginner, I want to judge Rust on the merits of what’s available in stable releases right now, not on the prospect of what may be available later, if at all. In one project, I decided to use serde’s codegen before “macros 1.1” was stabilized in Rust 1.15.
Sometimes Clippy fails to build even with the latest nightly compiler, so the first thing I usually do is browse its CHANGELOG file to find out which release was compiled with which compiler, and use that.
For example, given this excerpt:
```
0.0.124 — 2017-04-16
  - Update to rustc 1.18.0-nightly (d5cf1cb64 2017-04-15)

0.0.123 — 2017-04-07
  - Fix various false positives
```
I would pick Clippy version 0.0.124 and build it with the 2017-04-15 nightly compiler.
Starting with a working `rustup` I would then run:

```shell
rustup toolchain add nightly-2017-04-15
rustup run nightly-2017-04-15 cargo install clippy --force --vers 0.0.124
```
If the selected version still fails to compile, I just pick the previous one until I find one that works.
Since I always keep the stable toolchain as default, running `cargo clippy` as-is will result in an error:

```
0 19:40:53 lvillani@oculus ~/D/borg-hive (master=) $ cargo clippy
dyld: Library not loaded: @rpath/librustc_driver-8dacd42830809d58.dylib
  Referenced from: /Users/lvillani/.cargo/bin/cargo-clippy
  Reason: image not found
error: An unknown error occurred
To learn more, run the command again with --verbose.
```
Since we have `rustup`, running Clippy with the nightly toolchain we installed before is easy:

```shell
rustup run nightly-2017-04-15 cargo clippy
```
If, for some reason, running `cargo build` with the stable toolchain after Clippy ends up recompiling all dependencies, just tell Cargo to put its output files in a separate directory:

```shell
env CARGO_TARGET_DIR=./target/clippy rustup run nightly-2017-04-15 cargo clippy
```
This is especially useful if you then want to remove only the output files generated by a Clippy run.
Most nightly features are behind a feature gate, which means you won’t use them accidentally. Sometimes, though, rustc may change behavior without you noticing. For example, struct field reordering was recently enabled, breaking programs that relied on the previous behavior (it’s just an example: Rust doesn’t specify an ABI, and people shouldn’t rely on the compiler’s behavior here in the first place). I prefer to learn about behavior changes by reading the release notes published with each stable release, instead of having to wade through commit logs. ↩
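The field-reordering point in the footnote above can be sketched as follows. The struct names are made up for illustration, and the default-layout size is compiler-dependent, precisely because Rust doesn’t specify an ABI:

```rust
use std::mem::size_of;

// Default layout: rustc is free to reorder fields to shrink padding.
struct Reorderable {
    a: u8,
    b: u32,
    c: u8,
}

// #[repr(C)] pins the declared field order, so padding is inserted
// after `a` (3 bytes) and after `c` (3 bytes) to keep `b` aligned:
// 12 bytes in total.
#[repr(C)]
struct Pinned {
    a: u8,
    b: u32,
    c: u8,
}

fn main() {
    println!("default layout: {} bytes", size_of::<Reorderable>());
    println!("repr(C) layout: {} bytes", size_of::<Pinned>());
}
```

A program that assumed the default layout matched the declared order would break the moment the compiler started reordering fields.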
Today I learned that, under certain circumstances, Docker and an IPsec VPN can conspire to make your life as a developer miserable, by eating outgoing HTTPS connections started from inside a container.
The first symptom that something is amiss is usually being unable to get past the “TLS Client Hello” message during the handshake, or having the connection stall shortly after that. For example, `curl` from inside a container would just hang, even though it would work just fine on the host machine itself.
The scenario is the following: I have a standard Ubuntu 16.04 machine with Docker and other tools coming straight from the official repository, quite boring. An L2TP over IPsec VPN connects me to the remote site with a split-tunneling configuration.
Said VPN is configured client-side with StrongSwan and
xl2tpd, two of the most evil pieces of
software. Especially the latter, which will often crash unless planets are aligned correctly as the
author wanted. At the other end of the VPN is a Meraki box that shuts itself down if you just so
happen to sneeze around it.
All network interfaces have an MTU of 1500, except for the L2TP tunnel that sits around 1400, since the `xl2tpd`/`pppd` duo configures the `ppp0` interface like that, for whatever reason.
Here’s what an imaginary packet would encounter if it had to travel from inside a container to a machine at the other end of the VPN tunnel (in reality it’s more complicated than that so, please, bear with me):
It appears that the issue stems from Docker’s use of a bridge interface and the fact that Linux won’t generate the “Fragmentation Needed” ICMP message that would allow Path MTU Discovery (PMTUD) to work when IP packets have the “Don’t Fragment” bit set (which is typical for TCP streams). I’m no network engineer, though, so take my layman’s explanation with a grain of salt.
In my case the fix was simple: start the Docker daemon with the `--mtu=1400` parameter. On Ubuntu I only had to edit the value of the `DOCKER_OPTS` variable in Docker’s defaults file and then restart the daemon with `systemctl restart docker`.