Visual Studio 2010 Architecture Tooling Guidance

Interested in Visual Studio 2010 and architecture? Check out the latest delivery
of the Visual Studio ALM Rangers. Some more info on this product can be found in
this post from Willy, our team lead on this very interesting project. It was great
fun working on this piece of guidance!

The Rangers involved with this project are: Alan Wills (MSFT), Bijan Javidi (MSFT),
Christof Sprenger (MSFT), Clemens Reijnen (MVP), Clementino de Mendonca (MVP),
Edward Bakker (MVP), Francisco Xavier Fagas Albarracín (MVP), Marcel de Vries (MVP),
Michael Lehman (MSFT), Randy Miller (MSFT), Tiago Pascoal (MVP), Willy-Peter Schaub (MSFT),
Suhail Dutta (MSFT), David Trowbridge (MSFT), Hassan Fadili (MVP), Mathias Olausson (MVP),
Rob Steel (MSFT) and Shawn Cicoria (MSFT).

Cloud computing and Application Lifecycle Management

Recently, I noticed that over the last couple of months the amount of time that I
spend on cloud computing (Microsoft Azure in particular) has been increasing quite rapidly.
I am currently involved in a few initiatives and projects around Microsoft Azure, and I
suddenly realized that cloud computing has a positive impact on the way we think about
Application Lifecycle Management (ALM). When we think about cloud computing from a
project delivery or operations perspective (and not only from a technical perspective),
there are some genuinely interesting advantages that cloud computing can bring us.

[For more information about the way we think about ALM and a better understanding
of the image I am using below, have a look at this post.]

From a project delivery perspective, a cloud project has the advantage that we don’t
have to buy the hardware and software that we need for developing, testing
and running the application in production, because we use the compute and storage
power of the cloud. Of course we have to pay for using the Azure platform, but these
costs are likely lower than buying bare metal and licenses for the complete
lifecycle of the application. Another advantage of using a cloud platform is the short time
it takes to get our environment approved and up and running, which potentially decreases
the time to market of the application. If we look at the image below, which
represents the lifecycle of an application, we can see that less hardware and an environment
that is quickly up and running have a positive impact on the application lifecycle.
From a project delivery perspective, lower costs for the project (less
hardware and fewer licenses) and a decreased time to market have a positive
impact on the complete lifecycle of the application (represented by arrow 1 in the
image below).

Further, one of the important goals of ALM (at least in our opinion) is increasing
the added value of the application for the business user. Cloud computing enables
some very interesting scenarios that potentially bring a lot of value to the business
user. For example, the virtually unlimited scalability of the platform (at relatively low cost)
makes it possible to deliver completely new (business) services to markets that couldn’t
be reached that easily in the past. Integration between companies, networks,
applications, etc. also becomes much easier with applications running in the cloud. The
new scenarios that we can deliver by using cloud computing potentially add extra business
value to the applications we are delivering and therefore have a positive impact on
the complete lifecycle of the application (represented by arrow 2 in the image below).

[Image: value1]

From an operations perspective, the fact that we have less hardware and software to maintain
(back-up, monitoring, patching, etc.) also has a positive impact on the lifecycle
of the application (represented by arrow 3 in the image below).

So, besides all the technical challenges of cloud computing that I am very interested
in, I like to think of cloud computing as “just another delivery form” for our software
development projects, with a very interesting positive impact on the lifecycle of the
applications that we deliver.

Layer Diagrams: Application Architecture Guide

A few days ago Microsoft patterns & practices released a (preview) set of layer
diagrams for Visual Studio that comply with their Application Architecture Guide 2.0.
The diagrams are included in a simple VSIX package that can be downloaded from
the Visual Studio Gallery. The image below gives you some idea of what to expect
from this package. The diagram also contains a link to the complete architectural
guidance on MSDN.

[Image: layer]

The layer package is a great start, but there might be more opportunities for integrating
architectural guidance into Visual Studio 2010. Right now, the package contains five
layer diagrams, so it shouldn’t be too difficult for an architect to decide which of the
layer diagrams to choose from the toolbox. But what if we have additional
layer diagrams, or, in addition to the diagrams, we also have predefined architectural
inspections and/or validations that the architect can choose from? In that case
we might need additional guidance to help the architect choose between the various
diagrams, validations and inspections. Below you can see a screenshot of a prototype
we built in this direction, where we use a WPF form to ask the architect some questions.
Additional questions pop up based on the selections the architect makes (clicking yes
or no on the form) to help him decide what exactly he needs for his architecture.
In the end, based on his selections, the environment is prepared for him.

[Image: InterAccess Architectural Guidance Project Wizard screenshot]

This is just a prototype and not production ready, but it might give you some ideas
about how we think we should provide guidance. In the past couple of months Clemens and
I have done some work on these ideas, but we haven’t really finished it yet.
Now that Visual Studio 2010 is getting close to RTM it is time to finish this, so expect
some updates in this direction.

Stay tuned!

More return on development with Application Lifecycle Management

This is a translated version of an article that I wrote for Software Release Magazine.
Application Lifecycle Management

In the past ten years, the costs of IT projects have dropped significantly. In addition,
the number of projects that turn out to be successful has risen. Nevertheless, only
forty percent of all IT projects succeed. This means that it takes less time today
for a project to fail. Application Lifecycle Management (ALM) may help improve the
return on projects. An efficient deployment of ALM requires the right scope and focus.

Today, many organizations regard Application Lifecycle Management (ALM) as one of
the answers to their poorly performing IT departments. With ALM, they try to get more
grip on software development by integrating, coordinating and controlling the various
phases of development. ALM guides an organization from software development through
software implementation and management. Very often, an organization will limit its
focus to optimizing the developers’ work processes and the communication between developers
and project managers. An ALM tool is rolled out and its features are used to manage
the progress of the project as well as the quality of the code. Deploying an ALM tool
in this way is a step in the right direction, but in practice it is not a guarantee
of success. Without the right focus, software development will remain a stand-alone
activity without any relation to other parts of the organization, including business
and operations. Additionally, research shows that companies spend on average 30 percent
of their available IT budget on newly built applications. This means that they neglect
an area of 70 percent, where optimization is also possible.

More return by shift in focus

With the right scope and focus, ALM has much to offer. Figure 1 shows a schematic view
of the lifecycle of an application that is to be developed. In this view the x-axis
represents time and the y-axis the value/cost of the application. In this figure,
the extended curve shows the lifecycle of the application from development until end-of-life.
The figure provides insight into the different phases of an application’s lifecycle.
It also enables us to determine the impact of ALM.


Figure 1. The lifecycle of an application

 

Reduce development costs

The lifecycle of an application starts in the first phase of development. From the
start of the project, costs are incurred for design, programming and testing. At this
time, the application offers no value; all development spending is therefore to be regarded
as pure cost. Often, organizations focus on these costs when deploying ALM and optimizing
the software development process. Figure 1 shows, however, that the development phase
only accounts for a small part of the full lifecycle of the application.

Time to market

The figure also shows that the application only adds value to the organization
once it has gone live. This means that ALM activities need to be focused on getting
the application (or a part of it) live as soon as possible. One of the ways to do
this is using agile development methods, including iterative delivery. Shortening
the time to market not only results in faster added value, it can also provide competitive
advantage. In general, organizations that are first to address new needs or market
changes profit the most from these developments. Organizations that are trend followers
profit less; or even worse, they have to invest in order to stay in the market.

Added Value

It is clear that an application adds value once it has gone live. Many organizations
do not recognize this added value. ALM activities should focus on getting as much added
value as possible. As the application is developed for the end users, it is key to
involve this group as much as possible in the development process. User involvement,
support from (executive) management, defining clear business goals and optimizing
requirements are all equally important.

One of the ways to realize this is to optimize the communication between the user
organization and IT and to create a common involvement for all stakeholders. In this
way, stakeholders are better geared to state their demands. They are also better able
to determine the consequences of their choices and to change priorities and requirements
during the project, together with the project team. Here, it is also cost-effective
to use short iterations, as changes in scope, priorities and requirements can easily
be made. In this way, an organization can address new insights during the project,
which may increase the added value of the application further.

Operational costs

When developing the application, it is advisable to acknowledge the need for application
management at an early stage. By creating consensus on the requirements of the
management department, the operational management costs of the application can be reduced
significantly. This focus on management is paramount in an ALM approach.

Extending the lifecycle

By adding value and reducing costs, a developed application will provide return in
the long run. By focusing on the value of an application and by constantly monitoring
this aspect, an organization is better capable of determining the moment when value
is replaced by costs. This early insight helps in deciding what to do with the application:
adjust or phase out?

Phase out

When an organization decides to abandon an application, knowledge of the application
will lead to lower abandonment costs. This is definitely the case when an organization
combines this knowledge with the optimizations provided by ALM in the earlier development
phase of the application. An example is proper documentation of the application’s
interfaces to other systems.


Figure 2. The new lifecycle of an application

The new lifecycle

Figure 2 shows the new lifecycle of the application. This is the result of broadening
the ALM focus as described earlier. The green area in the figure depicts the extra
return of the application. This is made possible by speeding up the go-live process, a
longer life of the application and more added value for the user. The red areas in
the figure represent the decreased development costs and the decreased costs of abandonment.

Priorities of an organization

Figure 2 shows that the return on ALM increases when an organization not only focuses
on the development phase, but also on the other phases of an application’s lifecycle.
This optimization obviously pays off, but the question is how to relate it to the
goals of today’s organizations. Many organizations focus on cost reduction, compliance
and risk management. How can ALM be related to these three priorities?

Cost reduction

Too strong a focus on cost reduction may well lead to an imbalance in this area, resulting
in an organization that is paralyzed in terms of productivity. By continuously executing
cost-reducing measures, tools and communication channels are lost. In the end, this can
affect the productivity of employees negatively. This issue can be addressed by combining
cost savings with productivity improvements.

A combination of ALM activities with a so-called high-performance workplace can
be the answer. The high performance workplace is a physical or virtual environment
which is especially designed for knowledge and information workers. It supports them
optimally in executing non-routine duties. In these duties, exploring, learning, innovating,
collaborating and managing are key.

The current generation of ALM tools already pays attention to optimizing communication
and collaboration between stakeholders in an IT project. This shows that the awareness
on effective collaboration is growing. It also requires the focus to be placed on
the process and the human aspect of collaboration. Important success factors are creating
joint goals and making sure there is a shared vision of the truth.

Compliance

The current compliance requirements demand a high level of control. In recent years,
this control has been expressed in continuous process optimization. The need for a process
approach led to a reduction of flexibility within organizations. It is becoming increasingly
difficult to address market changes and needs without losing control. The focus on
processes, tools and management has also led to a situation in which as little as twenty
percent of project costs can be allocated to the software actually developed. The rest of
the costs are related to project support and meeting internal and external requirements. The
new generation of ALM tools and corresponding methods enables an organization to meet
these strict compliance requirements without affecting flexibility. These tools provide
the necessary mechanisms and means of control to link requirements, quality metrics
and the solution. This makes it much easier to prove that the solution meets all requirements.
The full support of agile methods within the ALM tools ensures the required flexibility.

Risk management

Experience from the past shows that risk is inherent to projects. However, the higher
the risk, the higher the return, as optimists say. It is not necessary for organizations
to exclude all risks. They need to continuously assess risks against the return they can
provide. Collaboration in a project, commitment from stakeholders and the
combination of business and ICT knowledge enable the right assessment of risks. This
may lead to a situation in which a risk that is regarded as unacceptable by individual
members is controllable or even desirable.

Conclusion

ALM is gaining in popularity. Many organizations are taking their first steps in this area
and starting to purchase ALM tools. Seemingly without thinking, they focus on the development
phase. This is an excellent first step, but they should not stop here. By broadening
their focus and incorporating the full lifecycle of an application in their approach,
they are able to increase their return on ALM significantly. The broader approach
offers more insight into the added value of the application. By combining ALM and a
high-performance workplace, and by putting the human aspect first, it is possible
to create an environment in which collaboration is optimized. The result is a software
development process with predictable results and sufficient flexibility to contribute
to the three main priorities of an organization: cost reduction, compliance and risk
management.

WSCF.blue Beta 1 is out!

I just wanted to let you know that we have just released Beta 1 of WSCF.blue.
This great tool supports a contract-first approach for developing
web services in Visual Studio 2008. Some time ago, I took the project lead (together
with Christian) for this great tool. Unfortunately, I have been kind of busy lately,
so we didn’t make a lot of progress in the last couple of months. Just recently a
couple of new members joined our team, which resulted in this Beta 1 release. In this
release we added MSI support, which was one of the key requested features for this tool.
For some more info on this release I suggest having a look at this post from Benjamin
(one of our new team members!). Our new team members inspired us a lot and brought some
great ideas, so expect some cool new features soon! Let us know what you think of this
release on our forum.

Thanks Buddhike, Benjamin and Alex Meyer-Gleaves for getting this new release out!

Architectural Inspections: Implemented in Visual Studio Team Architect 2010

Currently, Clemens and I are writing a whitepaper about architecture, the Application
Architecture Guide 2.0 and Visual Studio Team Architect 2010 (VSTA). In addition to this
paper we are also working on some ‘tooling’ that we plan to deliver with the paper.
Since we are not done with the paper and tooling yet, and this blog is becoming a bit
too quiet, I decided to start sharing some of our thoughts and work in this space on
this blog.

One of the topics in the paper is what we call ‘Architectural Inspections’. Without
going into too much detail just yet, we can think of an Architectural Inspection as
a ‘check’ that helps us verify the correctness of (parts of) an application architecture.
The concept isn’t totally new; in fact, the Application Architecture Guide 2.0 comes
with an organized checklist that sums up important inspections that an architect can
use during the design and/or validation phase of an architecture. Although a checklist
is a great start, we think that a standalone checklist doesn’t get the most out of
these so-called Architectural Inspections. In our opinion it will be much more powerful
if we can include these inspections in our Application Lifecycle Management practice,
integrate them into the Visual Studio IDE and provide the right guidance at the right
moment!

To validate our thinking, we collected all the inspections in the Application
Architecture Guide 2.0 checklists and stored them in an XML format. In fact, we
used the Team Foundation Server 2010 (TFS) Work Item Type XML format, which enables
us to easily upload our Architectural Inspections into TFS as work items. In addition
to the ‘core’ Architectural Inspection data, like title, status and description
(where we explain what we need to validate and can add additional guidance), we added
some metadata to categorize our Architectural Inspections and make it possible to
do some grouping. For example, we can categorize our Architectural Inspections per
‘Cross Cutting Concern’ (Logging, Validation), ‘Layer’ (Service Contract, Business
Logic, etc.), ‘ArchType’ (Mobile, Rich Client, Service, etc.), or whatever we think
makes sense. In addition, we have built a little tool that lets us upload these
Architectural Inspections into TFS as work items. Currently we store our Architectural
Inspections as normal ‘Task’ work items and abuse some ‘hidden’ fields to store the
metadata that we need. However, we have already realized that we are better off defining
our own work item type for our Architectural Inspections. So, this is probably the next
thing on my ToDo list…
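To give an impression of where such a dedicated work item type might go, here is a rough sketch of a definition in the standard TFS Work Item Type Definition (WITD) XML schema. The type name, the ‘InterAccess.Inspection.*’ field reference names, the states and the reasons are all hypothetical; this is not something we have actually shipped, the form layout is omitted, and a real definition would need more fields and rules.

```xml
<?xml version="1.0" encoding="utf-8"?>
<WITD application="Work item type editor" version="1.0"
      xmlns="http://schemas.microsoft.com/VisualStudio/2008/workitemtracking/typedef">
  <WORKITEMTYPE name="Architectural Inspection">
    <DESCRIPTION>A check that helps verify (part of) an application architecture.</DESCRIPTION>
    <FIELDS>
      <FIELD name="Title" refname="System.Title" type="String" />
      <FIELD name="Description" refname="System.Description" type="PlainText" />
      <!-- Hypothetical custom fields carrying the grouping metadata -->
      <FIELD name="Cross Cutting Concern" refname="InterAccess.Inspection.CrossCuttingConcern" type="String" />
      <FIELD name="Layer" refname="InterAccess.Inspection.Layer" type="String" />
      <FIELD name="ArchType" refname="InterAccess.Inspection.ArchType" type="String" />
    </FIELDS>
    <WORKFLOW>
      <STATES>
        <STATE value="Active" />
        <STATE value="Closed" />
        <STATE value="Rejected" />
      </STATES>
      <TRANSITIONS>
        <TRANSITION from="" to="Active">
          <REASONS><DEFAULTREASON value="New inspection" /></REASONS>
        </TRANSITION>
        <TRANSITION from="Active" to="Closed">
          <REASONS><DEFAULTREASON value="Inspection executed" /></REASONS>
        </TRANSITION>
        <TRANSITION from="Active" to="Rejected">
          <REASONS><DEFAULTREASON value="Not applicable to this architecture" /></REASONS>
        </TRANSITION>
      </TRANSITIONS>
    </WORKFLOW>
    <FORM>
      <!-- Form layout omitted for brevity -->
      <Layout />
    </FORM>
  </WORKITEMTYPE>
</WITD>
```

A definition like this could then be imported into a team project with the standard TFS 2010 command line (witadmin importwitd), after which the inspections would show up as first-class work items instead of abused ‘Task’ items.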

Below you can see a screenshot of (a very basic prototype of) the tool that we are
using to upload our Architectural Inspections into TFS. As you can see, we haven’t
spent too much time on the user interface yet, and the data in the screenshot is just
dummy data that doesn’t make too much sense.

[Image: Injector]

However, the most important thing right now is that by using a tool like this we (as
architects designing an architecture) can easily decide which Architectural Inspections
make sense for the architecture we are designing and add only those inspections to
our application lifecycle. This means we can, for example, add only those inspections
that apply to the layers or cross-cutting concerns that our architecture requires.
(In a future post we will demonstrate how we can even relate the inspections to layers
in our layer diagram.)

Another thing that we think is important is to have a clear overview of all the inspections
that were considered and/or executed during the design and/or implementation of the
architecture of the application. Knowing that the guidance and best practices of a
particular inspection weren’t properly implemented, or worse, were totally neglected,
is important information and (potentially) tells us something about the quality of the
application. Of course, sometimes it makes perfect sense not to spend time on cross-cutting
concern X. However, at a later time we often can’t recall the reasons for not spending
effort on it. The fact that we now have our Architectural Inspections stored in TFS
(as work items) makes it possible to track the current status (by using the status
field: Active, Closed, Rejected?) and provides us with valuable information about
the design decisions (captured in the description field?) that are made during
the lifecycle of our application.

Last but not least, we think that, to get Architectural Inspections fully integrated
into the application lifecycle, we need a proper way of visualizing them. In fact, an
overview of these inspections and their status might be a good starting point for a
quality check, or valuable input for our testers. The most common way to visualize
the status of work items would obviously be to create a report in TFS. However, we
thought we had better get some experience with another cool new feature of VSTA 2010,
so we decided to visualize our inspections in DGML. What we did is create a little
utility that extracts the Architectural Inspection information out of TFS and generates
a nice DGML diagram from it. Below you can see a screenshot of what our first
implementation looks like. (Again, we might need some UI improvements and some real data.)

[Image: dgml3]

The little icons in the nodes (representing an inspection) display the status of the
inspection. At this moment the green check means the inspection has the ‘Closed’ status
in TFS and the warning sign means it has the ‘Active’ status (so nothing has been
done with it yet).
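For readers who haven’t seen DGML before: the generated diagram is just an XML file that Visual Studio 2010 renders as an interactive graph. A minimal, hand-written example in the same spirit as our generated output might look like the sketch below; the node ids, labels, the custom Status property and the category names are made up for illustration and are not what our utility actually emits.

```xml
<?xml version="1.0" encoding="utf-8"?>
<DirectedGraph xmlns="http://schemas.microsoft.com/vs/2009/dgml">
  <Nodes>
    <!-- One node per Architectural Inspection work item; Status is a custom property -->
    <Node Id="Inspection1" Label="Validate all input at the service boundary" Status="Closed" />
    <Node Id="Inspection2" Label="Centralize logging" Category="CrossCuttingConcern" Status="Active" />
    <Node Id="ServiceLayer" Label="Service Layer" Group="Expanded" />
  </Nodes>
  <Links>
    <!-- A 'Contains' link places the inspection inside the Service Layer group -->
    <Link Source="ServiceLayer" Target="Inspection1" Category="Contains" />
  </Links>
  <Categories>
    <!-- Categories can be used to drive the styling of the nodes -->
    <Category Id="CrossCuttingConcern" Background="Orange" />
  </Categories>
</DirectedGraph>
```

Because custom properties such as Status travel with the nodes, the diagram can show the state of each inspection (for example via icons, as in the screenshot above) without going back to TFS.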

There is a lot more to tell about the things we have been working on and the thoughts
we are still having about Architectural Inspections, the Application Architecture Guide
and VSTA 2010 extensibility. We are currently busy improving and refactoring all of the
above. In the coming period we will share some other VSTA extensions that we are working
on, and if things go as planned everything will end up in the whitepaper and/or
downloadable assets. So, stay tuned, and of course we are very interested in your
opinions, concerns, etc., so leave us a message!

Making money with Application Lifecycle Management

A few days ago I was asked by one of my colleagues why I am spending a lot of my time
experimenting with Visual Studio Team System 2010 (Team Architect), Blueprints, the App
Arch Guide and Application Lifecycle Management (ALM) in general. He had noticed me
‘living’ in the VSTS 2010 CTP for some time now, and he was wondering if it isn’t a bit
too early for this and what I did to convince management to let me do this. My immediate
answer to this question was ‘No, it is not too early!’ and I explained that we (Inter
Access) expect VS 2010 to help us optimize our Application Lifecycle Management
practice. This answer was a bit too vague for my colleague, and of course the next
question was how *exactly* we will benefit from investing in VSTS 2010 and ALM. Will
it make our life easier? Will it make us better people? Will it improve quality? Will
it save us time? Will it save us money?

Exactly these same questions pop up when discussing ALM with customers. Apparently,
making the business case for ALM (and/or VSTS licenses) isn’t always easy. How come?

From our experience we have learned that currently most people and organizations relate
ALM to their development activities (the software development lifecycle). Therefore it
is only logical that this is the area where people try to identify their benefits
(cost savings) from ALM. But is this correct? Is this focus too limited? Shouldn’t
we focus on more than only development when it comes to cost savings? Especially if
we keep in mind that, on average, only 30% of the IT budget is spent on new application
development (the remainder is spent on maintenance and operations)!

How come most of us still only focus on development? Is it because we still focus
too much on the tools instead of facilitating collaboration between ‘Business’,
‘Development’ and ‘Operations’?

Everybody experienced with VSTS 2005 and/or VSTS 2008 will come to the conclusion that
these tools mainly focus on the different roles within the development team (developer,
architect, project management). The source control, unit testing and quality assurance
features of these products provide us with a professional development environment
and help us improve the overall quality of the products that we deliver. Work item
management, a centralized store, reports, portals, etc. improve the collaboration
within the development team and support project management in tracking progress, staying
in control and managing risks adequately. All of this is great and potentially boosts
the performance of development teams, but experience teaches us that these benefits
don’t come ‘out of the box’! Installing the tools doesn’t make the development team
collaborate by default, and most certainly doesn’t stimulate collaboration with the
Business and Operations!

Now that we know what most of us focus on for our ALM-related activities, let’s see how
this relates to the complete application lifecycle. For this we will use the graph
below, where the x-axis represents time and the y-axis represents value (with negative
value displayed as costs).

[Image: clip_image002]

Obviously, the lifecycle of the application starts with its development. During this
phase we incur costs to design, develop and test the application. At that time
the application doesn’t bring us (actually, the business) any value, and the complete
development phase of the project only costs money. From the moment the application
(or parts of it?) is installed into production, the application starts to generate
value, until the moment it needs to be phased out, where it starts to cost money again.

What we see is that most organizations focus on reducing development costs and
(sometimes) try to shorten the time to market. By the way, it doesn’t come as a
surprise that these are exactly the areas that the current releases of Visual Studio
Team System focus on.

[Image: clip_image004]

Reducing costs and making the application add value earlier is good, but if we look
at the image above we can see that the application lifecycle doesn’t end at the
moment the application goes into production (where the lifecycle line crosses the x-axis).
So, wouldn’t it be great if our ALM practices helped us optimize (reduce costs and/or
increase value) during the remainder of the application lifecycle as well?

For example, one of the things we can do to increase the business value is to practice
proper user experience design (see this post from my colleague Andries for
more info on this). By taking ‘Operations’ into account during the design and development
phase of the application, we can reduce operations costs during the remainder of the
lifecycle. These things combined will result in an application that is more successful
for a longer period of time (because it adds more value and costs less to maintain).
Also, because we have done a good job developing the application, we know exactly
what it does, what it interfaces with (something VSTA 2010 will help with) and,
most importantly, when it stops adding value, which will help reduce the ‘phase out
costs’ of the application.

Adding this to the graphical representation of our application lifecycle results in
a graph that looks like this.

[Image: clip_image006]

Based on this, we can now draw our new application lifecycle, which might look
something like this (the dotted line is the new lifecycle).

[Image: clip_image008]

The good news is that the green area between the ‘old’ and ‘new’ lifecycle is the
area where we can make money by adding extra value. The red colored areas are the
places where we can make money by reducing costs. Doesn’t that look great?

Please note that ‘reduce operations costs’ might be misunderstood from this graph.
We don’t mean less value, but lower costs. I didn’t know how to display this correctly
:-)

Of course, all of these things don’t come by themselves. We actually have to work to
make them happen, and we can’t do everything at once. In this post I am not going to
detail all the steps we can take to make this happen, or where we can use current or
future tooling. However, hopefully this last image makes it very clear that there are
other areas, besides development, within the application lifecycle where we can either
reduce costs or increase value. So, if anybody asks you why they should invest in ALM,
this image should give you a starting point for your discussion…

At least, it *did* help me explain why I should spend my time on ALM and experimenting
with VSTS 2010, Blueprints and the App Arch Guide :-)

 

 

Blueprints: Visual Studio 2010 (2)

In an earlier post we mentioned that it is relatively easy to get the current
Blueprints bits running on the Visual Studio 2010 CTP by modifying the .MSI in Orca.
At that time we forgot to mention that we need a few extra steps to really get things
going with Blueprints in Visual Studio 2010.

When trying to build a Blueprint solution in Visual Studio 2010 we will notice the
following error in the error window.

[Image: Error]

As we can see, the build task ‘BASM’ fails to retrieve the correct path. This
task is implemented in ‘Microsoft.SoftwareFactories.Blueprints.Builds.Tasks.dll’,
which can be found in ‘..\Program Files\MSBuild\Microsoft\Blueprints\2.0’.
It turns out that the Execute method of this task looks for a (hardcoded) string
value called ‘Blueprints’ under the ‘HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\VisualStudio\9.0\MSBuild\SafeImports’
key. Because we replaced all occurrences of ‘9.0’ with ‘10.0’ in the Blueprints .MSI
to get it to install on Visual Studio 2010, this value no longer exists under ‘9.0’
(but does under ‘10.0’).

To fix this we can either make sure to skip this particular replacement when modifying
the .MSI in Orca, or manually add the Blueprints string value under
‘HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\VisualStudio\9.0\MSBuild\SafeImports’
and give it the value ‘C:\Program Files\MSBuild\Microsoft\Blueprints\2.0\Microsoft.SoftwareFactories.Blueprints.targets’.
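For the manual route, the registry value can also be applied with a small .reg file like the sketch below. The key and value are the ones quoted above; this assumes a 32-bit machine and the default Blueprints install path (on a 64-bit machine the key would live under the Wow6432Node branch instead).

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\VisualStudio\9.0\MSBuild\SafeImports]
"Blueprints"="C:\\Program Files\\MSBuild\\Microsoft\\Blueprints\\2.0\\Microsoft.SoftwareFactories.Blueprints.targets"
```

Double-clicking the file (or importing it with regedit) adds the string value that the BASM task is looking for.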

Another issue occurs when debugging our Blueprint in Visual Studio 2010. Currently,
there is no property page implemented for the Blueprint project type (.bpproj), and
therefore starting up Visual Studio 2008 is hardcoded in the Blueprints core. To get
around this we can add an empty C# class library project to our solution, set this
project as the startup project and make it start Visual Studio 2010 (via its property
page) when debugging. Although this solution does work, it makes the Visual Studio
instances in my Virtual PC image VERY slow (I don’t know why). Another option, which
does work for me, is to leave the Blueprint project as the startup project, let it
start up a Visual Studio 2008 instance (and simply ignore it), then manually start
another Visual Studio 2010 instance and attach it to the debugging process of the
Visual Studio instance we started the debug session in.

Now everything is in place to *really* start developing Blueprints for Visual Studio
2010 CTP!