soa-rm message



Subject: Re: [soa-rm] SOA-RM Jan 3: delayed but something to think about


Thanks, Michael.  I’m still reading and thinking.

------------------------------------------------------------------------------
Dr. Kenneth Laskey
MITRE Corporation, M/S H330          phone: 703-983-7934
7515 Colshire Drive                           fax: 703-983-7996
McLean VA 22102-7508

On Jan 17, 2018, at 1:00 PM, Mike Poulin <mpoulin@usa.com> wrote:

Hi Folks,
 let me share my humble experience.
 
I do not remember ever working in Waterfall. Usually it was RUP (the IBM Rational Unified Process). For the last few years, I have worked in Agile/Scrum, but as an architect.

I worked in big companies, and Programme/Enterprise-level tasks and solutions undoubtedly required collecting and understanding requirements (at a certain level) up front. Moreover, solutions were usually accompanied by an estimate of cost and resources. At this level, DevOps is practically invisible. The work on an estimate needs a Tolerance Level (TL) – the probability that the estimate is incorrect. For example, if TL=25%, the estimate and, therefore, the solution's requirements should be very accurate, which takes much longer to collect and process up front, even before the project starts. If TL=75%, the estimate may undershoot the real cost significantly, but it takes much shorter time while still giving architects and managers some understanding of the potential cost and resources.
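The trade-off between a tight and a loose Tolerance Level can be sketched as a simple range calculation. This is purely illustrative – the formula and the function name `estimate_range` are my own assumptions, not from the message:

```python
# Hedged sketch: how a Tolerance Level (TL) might widen an up-front cost
# estimate into a range. The formula here is illustrative only.

def estimate_range(point_estimate, tolerance_level):
    """Return a (low, high) cost range given a point estimate and a TL in [0, 1]."""
    delta = point_estimate * tolerance_level
    return (point_estimate - delta, point_estimate + delta)

# TL=25%: a tight range, which demands accurate requirements up front.
assert estimate_range(100_000, 0.25) == (75_000.0, 125_000.0)
# TL=75%: a wide range, cheap to produce but it may undershoot the real cost.
assert estimate_range(100_000, 0.75) == (25_000.0, 175_000.0)
```

The point is the cost of accuracy: the tighter the tolerance, the more requirements work must happen before the project even starts.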

In hierarchical organisations, nobody has a replacement for the activities described above. It is regular practice that proper understanding and solutioning of requirements takes a few iterations and revisions, caused by internal interdependencies that are usually hidden in the articulation of business statements. Sometimes (frequently), the final solution appears as a "third cousin" to the initial one.

The main conclusion from this observation is that before we write the first epic and the first story, we have to be aware of the last ones, though they are not written yet. Plus, we have to have ideas (experience?) about all the stories, because otherwise our schedule of Sprints – their topics and duration – will be way off.

The second conclusion is that Agile "by the book" can work for relatively small isolated projects/tasks, which we find in the user interface rather than in the middle or back layers of systems. In small isolated projects/tasks, we can rely on the competence of developers and their ability to understand the task and realise it. However, the GDPR and PSD2 regulations (initially ignored by developers, as usual) now require close architectural/managerial control over small tasks as well.

Thus, Agile has not brought any fundamental change to Enterprise- or Programme-level development; the changes are at the small project level only. This is reflected in testing as well – if you have more than one Agile team working on a business task, you have to not only coordinate development between the teams and answer their questions about cross-cutting issues, but also provide for integration testing across the teams. The more teams, the more testing after the teams have completed their tasks. DevOps in such a case automates deployment into the Testing Environment instead of the Production one.

The idea of Continuous Documentation is already realised via the Jira/Confluence tool-set. It is filled in during each Sprint and, by the end of the work, fully represents the process and results of the Agile development. The idea of Continuous Requirements is also realised via layered abstraction of requirements – coarse-grained first, fine-grained later.

About refactoring – this is a good idea, but again, for small isolated projects. If a project includes 5-7 Sprints, each with its own Story, at the 4th Sprint it is usually necessary to refactor the outcome of the 1st Sprint. The more Sprints to go, the more refactoring – not really planned up front – accumulates and, finally, kills the later Sprints. Talking about Microservices, the Testing Environment for them must include all engaged MS along the whole invocation chain. Where external MS are used, full attention should be on the 'rainy day' scenarios for those external MS. Also, testing of MS invocations must include full implementations of the MS, not only their interfaces – this is my lesson from 20 years of working with Services.
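The point about testing against full implementations rather than interfaces can be shown with a minimal in-process sketch. Everything here is hypothetical (the service names, the `reserve`/`place_order` operations); real microservices would be network calls, but the contrast is the same:

```python
# Hedged sketch: testing a service chain against an interface-only stub
# versus a fuller implementation. All names are illustrative.

class InventoryStub:
    """Interface-only stub: always succeeds, so it hides 'rainy day' behaviour."""
    def reserve(self, sku, qty):
        return {"status": "ok", "reserved": qty}

class InventoryService:
    """Fuller implementation: enforces stock limits, like a real service would."""
    def __init__(self, stock):
        self.stock = dict(stock)
    def reserve(self, sku, qty):
        available = self.stock.get(sku, 0)
        if qty > available:  # the failure path a stub never exercises
            return {"status": "insufficient", "reserved": 0}
        self.stock[sku] = available - qty
        return {"status": "ok", "reserved": qty}

class OrderService:
    """Downstream service in the invocation chain."""
    def __init__(self, inventory):
        self.inventory = inventory
    def place_order(self, sku, qty):
        result = self.inventory.reserve(sku, qty)
        # 'rainy day' handling that is only reachable with a full implementation
        if result["status"] != "ok":
            return "backordered"
        return "confirmed"

# Against the stub, the failure branch is unreachable; against the fuller
# implementation, the same call exercises it.
assert OrderService(InventoryStub()).place_order("widget", 99) == "confirmed"
assert OrderService(InventoryService({"widget": 5})).place_order("widget", 99) == "backordered"
assert OrderService(InventoryService({"widget": 5})).place_order("widget", 3) == "confirmed"
```

A test suite wired only to stubs would pass every time while leaving the "backordered" path entirely unproven, which is exactly the risk with interface-only microservice testing.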

- Michael

 
Sent: Tuesday, January 02, 2018 at 11:22 PM
From: "Ken Laskey" <klaskey@mitre.org>
To: soa-rm@lists.oasis-open.org
Subject: [soa-rm] SOA-RM Jan 3: delayed but something to think about
 
I have a meeting tomorrow morning that is scheduled to go to noon, so I suggest we have an abbreviated meeting from noon to 1300.  Does this work for people?
 
Please review the minutes from 6 Dec 2017 because I seem to recall some of us were supposed to augment what Rex captured.  I sent in some edits.  Check if you want to make some contributions.
 
Finally, some thoughts to consider. I submit the following as a thought piece on something I’ve been kicking around.  It may have value or be a rathole.  I’ll leave that up to you to decide.
 
There are a couple weak spots that always show up in discussions of Agile, mostly brought up by people with Waterfall experience who are not convinced Agile should take over the world. The issues are really carryovers where the Agile approaches don’t adequately address a known Waterfall need.  The areas of interest here are requirements at the front end and documentation as we approach the back end.
 
From a Waterfall perspective, requirements are the starting point of what the system needs to accomplish.  Traditional problems are (1) how well do we know at the start what the system is supposed to do, (2) do we really know the system in the detail to which the requirements (we believe) need to be specified, and (3) how likely are we to keep the official requirements up to date as we learn more about the system and how our vision of the system really meets the user needs.
 
For our second issue, in Waterfall, documentation is often done at the project end when staff is exhausted and lack patience.  Depending on how the project went, there may not be adequate resources left to create quality documentation.  If documentation is to be delivered throughout the project, then we run into the same problem as with requirements where we need attention and resources for updates to capture changes or learning along the way.  The Agile emphasis on avoiding documentation that will never be updated and never be read is often interpreted as an excuse to avoid documentation altogether rather than improving the documentation as a whole (where sometimes less is an improvement).
 
Agile and DevOps approaches emphasize "continuous", whether that be continuous testing, continuous integration, or continuous …  DevOps also emphasizes automation to consistently and efficiently perform the enabling processes.  We can look at "Continuous Requirements" as the collection of user stories/features/epics, but there are numerous complaints that user stories can become a collection of special cases, missing the common thread that would be revealed through traditional requirements analysis.  Agile development counts on refactoring as a way to address this, but refactoring can be time-consuming and still leaves us with the question of how we collect the stories (and eventually the refactored versions) in a useful way that provides an informative whole.  The real problem is that these tasks still tend to be manual, essentially not continuous, and sometimes just not done.
 
What fundamental artifacts and enablers do we have to consistently address these issues?  My thought is this should be approachable as part of Continuous Testing.  The capability delivered by a feature is not really the user stories ticked off the backlog but what has been tested and proven to work.  The requirements satisfied are the collection of what tests prove the system can do and why we want the system to do the tested things and not a list of nice to haves.  Can we aggregate information that defines our tests to assemble our documentation?  To what extent can this be automated?  Can we create the idea of Continuous Documentation?
 
Have you seen this done (attempted) before?  What do we know about doing something like this and what do we need to find out?  How many of you have been involved with Continuous Testing?  How were tests specified?  How were tests documented?  How were the tests configuration controlled?  What do we do to spit out microservices and containers at the output end?
 
In summary, think of Continuous Requirements as the capture of our evolving thinking about what user stories/features/epics we implement and the rationale for our capability decisions.  Think of Continuous Documentation as the capture of what the system can actually accomplish.  Can we do this better than we typically do?
 
Ken


