How We Measure Standards (and why it’s sort of a problem)
A fine example of how deeply political technology is, and of how essential a deep understanding of it is for anyone who wants to steer or regulate it. Programming is always a political act, but here we are not even talking about features, platforms, or software architecture... we are talking about tests. Tests.

https://medium.com/@thejohnjansen/how-we-measure-standards-and-why-its-sort-...

I’ve been working on the web platform since 2010. I moved over to the Internet Explorer team from SharePoint Designer to help with IE9, and I’ve been here ever since. At that time, Chrome had only barely shipped. Chrome version 4.0 (still based on WebKit at the time) and Firefox version 3.6 (running Gecko) both released about the same day I started on IE. But to be clear: we were not much concerned with Chrome. It was certainly cute and seemed fast, but the real competition was Firefox. Mozilla strongly argued that they were the “Standard-bearer” of Standards (competing directly with Opera for that role), and we decided to compete against them in the standards arena. [...]

In order to accomplish a standards mode, Microsoft engineers wrote a lot of tests. A lot. And we made some of them public by donating them to the W3C. [...] It turns out that it really does require experts in the code to write good tests of that code. [...] Well, as stated above: writing tests requires expertise. It is also very time-consuming. It is therefore very expensive.

For all new features coming through the W3C, Microsoft was submitting tests that showed we were passing nearly 100% in Standards, and also that Firefox was passing only 90% or so (interestingly, and in hindsight a really really important fact: Chrome was also at about the 100% mark on new features, and when I would try to demonstrate they were not following standards it was very difficult to do). So most of the tests that were publicly consumed were written by Microsoft, and they were mostly concerned with only new features. [...]

What I learned from all of this is that he who controls the tests begins to control the perception. And our game theory was correct: Firefox had to dedicate engineers to fix public test failures. They had to dedicate engineers to write tests so that Microsoft would not control the entire narrative. [...] However [...] While we were focused on Mozilla and their limited engineering resources, Google was choosing to throw as many bodies as necessary at the problem. And they did a really (really, (really)) good job [...]

In 2017, Chrome enabled two-way sync between the Chromium source and Web Platform Tests. [...] And that is a good thing. (But) New features in Chromium must include tests. If those tests fail, the merge will likely be rejected. Those tests then sync automatically with the Web Platform Tests. Adding any additional tests that Chromium might fail requires expertise from another feature expert. [...] And so it falls again to Mozilla. [...]

You can see from the current dashboard (wpt.fyi, written primarily by Google) that Chrome passes more tests than any other engine. And they will continue to submit tests for new features that will also pass 100% of the tests. And the only way that Mozilla will change that is if they submit tests at the same rate as Chrome. You may be thinking, “Whoa John, hold on. Firefox also has two-way sync so aren’t they just doing the same thing?” Well, they could, except they have to spend so much time fixing bugs as seen on wpt.fyi, they really cannot play game theory here. [...]
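To make concrete what “a test” means here: Web Platform Tests are typically small, self-contained HTML files driven by WPT’s testharness.js. A minimal sketch follows; the CSS property checked and the file name are illustrative choices of mine, not taken from the article.

  <!DOCTYPE html>
  <meta charset="utf-8">
  <title>Hypothetical WPT example: display: flex computes as specified</title>
  <script src="/resources/testharness.js"></script>
  <script src="/resources/testharnessreport.js"></script>
  <body>
  <script>
  // test() registers a synchronous test with the harness;
  // assert_equals() fails it with a readable message on mismatch.
  test(() => {
    const el = document.createElement("div");
    el.style.display = "flex";
    document.body.appendChild(el);
    assert_equals(getComputedStyle(el).display, "flex",
                  "computed value of display");
  }, "display: flex computes to flex");
  </script>

A file like this runs unchanged in every engine through the shared WPT infrastructure; it is tests of this shape that the two-way sync keeps flowing out of Chromium, and whose pass/fail counts wpt.fyi aggregates into the per-engine scores the article discusses.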
The industry measures standards compliance via test suites that do not necessarily test the things web developers care about, that exercise features in ways real code may never use, and that do not show a complete picture of implementation status.
Giacomo