This summer, I asked how many teams are doing Scrum according to the minimal definitions of the Nokia Test. 74% of Scrum Teams responding are doing what Jeff Sutherland now calls Scrum-Butt – “we’re doing Scrum, but for some reason, we can’t do all of Scrum.” According to Jeff, even Scrum-Butt companies may improve their revenue, but those who go beyond Scrum-Butt do much better financially than those who accept dysfunctions. Jeff has extended the Nokia test to identify the factors which help companies achieve this “hyper-productive” state.
Most of my clients are not yet ready to push the envelope. They are trying to achieve the basics of good agile management and development. For these companies, the Nokia test is a good place to start, an early milestone, but not the final goal. (BTW – Alistair Cockburn’s 7 Crystal Properties also look like a good starting point, and some of his points are raised in the candidate list below.)
The next question is: how is software engineering doing? I want a litmus test, i.e. a short list of questions for challenging developers and their management on their engineering practices.
My question to you: What questions make a litmus test for “pretty good agile development?” My goal is to come up with ten to fifteen yes/no questions.
The Joel Test was an early example, but is now dated. There have been several attempts at more agile definitions of the test (e.g. confused of calcutta and jbrister), but these have not been validated. All of these lists contributed to the list of candidates, below.
This week, I ask for your help in picking the questions. Next week (or so), I will summarize and then conduct a survey based on the questions you select.
Here is the poll: Which questions make up the Litmus Test for Pretty Good Agile Development?
BTW 1: All questions are in the form keyword:question. The keyword is there to improve the readability of the poll in Doodle.
BTW 2: I will be at the Scrum Gathering in Stockholm and look forward to meeting as many of my readers as possible! Please let me know if you’re coming!
BTW 3: Voting closes Midnight (UTC) on October 25. So vote now!
Have I missed anything important? That’s what comments are for 😉
Thanks for your help!
4 Comments
Knowing whether my team is agile is less useful than knowing if we are providing value to our customers. But if the test is structured in such a way that the questions can help identify behavior vectors which might help us improve, then that would be OK. Given that:
In this test is “quiet environment” a good thing or a bad thing? When I’m in our team area and it’s too quiet, then I know communication isn’t going on and that’s generally not a good thing.
Does “colocated” mean cubes in the same area, or an open team work area where team members can easily move around to help each other, pair on work, and rapidly gather around the whiteboard to discuss today’s design question? Cubes in the same area is better than different buildings or cities, an open reconfigurable work area is much better yet.
Does “Can you test in a single step?” mean “Are the vast majority of your system tests automated?”? If not, what does it mean?
Testers: Is having testers a good thing? I personally believe so. I think the question should focus on how the testers and developers work together to deliver the best possible system: Daily collaboration, joint creation of test automation tools, etc.
Releases: I think twice in six months is an incredibly low bar for agility.
Wiki: While I encourage teams to use a wiki for both near-time communication and as a simple method for knowledge capture, the question seems too focussed on a technology. I’m sure there are teams who use Sharepoint or some other technology in the same manner we use a wiki.
Proposed additional questions:
Do you create an automated unit test to replicate the problem before fixing any issue discovered by testers or in production?
Are there areas of the system that only one person can work on?
I don’t like the test. I don’t think it will be valuable, and I’m not sure what goals are behind it.
Douglas,
Thanks for your feedback!
Yes, a lot of questions are open to interpretation. In the original Joel test, “quiet environment” probably meant “developers have their own offices, can close the door and be left alone.” Today it would probably mean the teams have a team space where they can talk as they see fit, but which is free of ambient noise from other groups, construction, etc. And the team members would probably also like access to quiet rooms, so they can close the door and concentrate when they need to.
Cheers,
Peter
Hi Anonymous,
I am sorry you feel that way. But let me explain why I am asking these questions.
I often go into projects that are “challenged.” They are not delivering to management expectations. The project is my patient; I’m its doctor. I want to take its pulse, check its temperature, look at its tongue, whatever, to come up with a first diagnosis of what is ailing the project, so that I can help it (or more precisely, the team) back to “health.”
This is perhaps a first iteration. I had some discussion about this at the Stockholm Scrum Gathering, and one very good idea would be to come up with a list of “smells” (perhaps “symptoms” would be a better name) and their causes.
Any ideas?
Cheers,
Peter