Dynamically selecting one or more approvers as participants in a “Start a task process” action in Nintex workflows for Office 365

How can I dynamically select the participants/approvers in a parallel task approval process in Nintex Workflows for Office 365?

I recently created a solution where our customer wanted a task approval process to be sent to multiple users based on a user’s selection of an area of responsibility. For example, if the “Finance” department is selected, the approval task may be sent to Jon Smith and Jane Doe. However, if the “HR” department is selected, the approvals may go out to Fred Young and Joe Mandoza. The departments and their approvers exist in a SharePoint list. In this post I’ll describe how I solved this problem. I hope you find it useful and that it saves you much heartache.

Creating the Department Approvers list.

First, let’s create a SharePoint list that contains a column for the department (single line of text) and one for the approver (person).

I filled in some data so that if I select the Finance department, two approvers, “Clements, Chris” and “Chris Clements”, will be selected and added as participants in the task process.

Creating the primary list.

Next, let’s create a list to which we will attach our Nintex workflow. This list will have one lookup column linked to our Department Approvers list. Let’s call our primary list Quarterly Reports.

Here is what the new item form looks like. Note the lookup column displaying the available departments from Department Approvers.

Creating the “Start Approval Task Process” workflow.

Start by creating a new list workflow on the Quarterly Reports list.  Call the workflow Get Approvals.

Step 1: Query the list to get the approvers based on the selected department.

Use the query list action with the following configuration:

We are going to select the approver column from the Department Approvers list where the department column matches the selected department on the Quarterly Reports list. Configure the action as shown below.

Note that we are selecting the “User Name” property of the approver column. In Office 365 this will be the person’s E-mail address. Also notice that I created a workflow variable of the collection type, colDepartmentApprovers, to store the results in.

Step 2: Transform the colDepartmentApprovers variable into a dictionary

If you were to run the workflow now and observe the contents of the colDepartmentApprovers variable you would see that its format is that of a JSON object called “Approver” which contains an array of “UserName” key value pairs.

[{"Approver":[{"UserName":"Chris.Clements@mydomain.com"},{"UserName":"cleme_c@mydomain.com"}]}]

But look closely at the leading and trailing brackets []. This means that this is an array of JSON Approver objects, each with its own array of key value pairs.

We need to crack open this string to get at the E-mail addresses inside, but it is going to take several steps to get there, so grab a beer and chill.

Let’s add the “Get Item from Collection” action.

Configure the action as shown below:

Notice that I hard-coded the 0 index counter. This means that I will always fetch the first item in the collection. I realize that if a user were to create a second record for the “Finance” department in the Department Approvers list, this workflow would not pull in the approvers listed in that row. To handle that case we would need to loop across the colDepartmentApprovers variable. However, to keep this post concise I will only handle one row of approvers per department.

In the output section of this action I created a workflow variable called “dicDepartmentApprovers”. This variable will hold whatever was in the first position (the 0 index) of the colDepartmentApprovers variable.

Let’s run the workflow again and check out the contents of dicDepartmentApprovers.

{"Approver":[{"UserName":"Chris.Clements@mydomain.com"},{"UserName":"cleme_c@mydomain.com"}]}

We were successful at pulling the first and only “Approver” row from the collection. The leading and trailing brackets “[]” are gone! Now we need to get at the array of key value pairs containing the E-mail addresses.

Step 3: Parse the Approver object into a dictionary

Let’s add the “Get an Item from Dictionary” action.

Configure the action as shown below:

In this step I am selecting the “Approver” path from the dictionary called dicDepartmentApprovers; however, I insert the resulting data back into the collection variable. What’s going on here? Let’s run the workflow again and see what the data in the colDepartmentApprovers variable looks like.

[{"UserName":"Chris.Clements@mydomain.com"},{"UserName":"cleme_c@mydomain.com"}]

Hopefully you see that we are again dealing with an array/collection of data. Selecting the “Approver” value from the dictionary yields a collection of “UserName” values. This time we are going to loop across this collection and crack out the individual E-mails.

Step 4: Looping across the colDepartmentApprovers collection

Add the “For Each” action.

Configure the action as shown below:

I specified our collection variable, colDepartmentApprovers, as the input collection across which we are going to loop. For the output variable I created a new workflow variable called “dicUserName”. This variable holds the currently enumerated item in the collection. In other words, it is going to hold a single instance of the “UserName” key value pair each time through the loop. Its value will look something like this on the first trip through the loop: {"UserName":"Chris.Clements@mydomain.com"} and like this: {"UserName":"cleme_c@mydomain.com"} on its second trip.

Given that the value of dicUserName is a dictionary, we can use the “Get an Item from Dictionary” action and specify the “UserName” path to arrive at the E-mail address. Let’s take a look.

Add the “Get an Item from Dictionary” action. Be sure to place this action inside of the for each looping structure.

Configure the action as shown below:

Again we use the Item name or Path setting to retrieve the value.  In this case we set “UserName” and save the results into a new workflow variable called txtApproverEmail.  Its contents look like this:

Chris.Clements@mydomain.com

VICTORY! Well, almost. Remember, we want our approval task to go out to both E-mail addresses at once. We are going to need to concatenate our E-mail addresses into a format that is acceptable to the “Start a task process” action.

Let’s add the “Build String” action.

Configure the Action as shown below:

In this action I am stringing together our E-mail address with a new, empty workflow variable called txtCombinedApproverEmail. I take the result of the concatenation and store it back in the txtCombinedApproverEmail variable. It works like this:

The first time through the loop, txtApproverEmail = “chris.clements@mydomain.com” and txtCombinedApproverEmail is empty, so the result is:

chris.clements@mydomain.com;

The second time through the loop, txtApproverEmail = “cleme_c@mydomain.com” and txtCombinedApproverEmail = “chris.clements@mydomain.com;”, so the result is:

cleme_c@mydomain.com;chris.clements@mydomain.com;

PRO TIP: Notice that there is NO space between the E-mail addresses and the semicolon.  You are warned.
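As a side note for readers who think more naturally in code than in workflow actions, here is a minimal C# sketch (not part of the workflow, and assuming the Newtonsoft.Json package) of the same unwrapping that Steps 2 through 4 perform on the Query List output:

using System;
using Newtonsoft.Json.Linq;

class ApproverParsingSketch
{
    static void Main()
    {
        // The raw value the Query List action stores in colDepartmentApprovers.
        var json = "[{\"Approver\":[{\"UserName\":\"Chris.Clements@mydomain.com\"},"
                 + "{\"UserName\":\"cleme_c@mydomain.com\"}]}]";

        var rows = JArray.Parse(json);                // outer array: one entry per matching list row
        var firstRow = (JObject)rows[0];              // "Get Item from Collection" with index 0
        var approvers = (JArray)firstRow["Approver"]; // "Get an Item from Dictionary", path "Approver"

        var combinedEmails = string.Empty;
        foreach (var approver in approvers)           // the "For Each" loop
        {
            var email = (string)approver["UserName"];      // inner "Get an Item from Dictionary", path "UserName"
            combinedEmails = email + ";" + combinedEmails;  // the "Build String" concatenation
        }

        Console.WriteLine(combinedEmails); // cleme_c@mydomain.com;Chris.Clements@mydomain.com;
    }
}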

Step 5: Beginning the Approval Task

Finally, we are at the last step: adding the “Start a Task Process” action and assigning our txtCombinedApproverEmail variable to the Participants field. Remember to create this action outside of the for each loop.

And the configuration:

All that I am really concerned with here is setting the participants field to my workflow variable txtCombinedApproverEmail.  Let’s run the workflow and see if we get our task assigned to two approvers at once based on our selected “Finance” department.

Ah, and there you have it:

This is the entire workflow I created in this post:

Closing Thoughts

I have to be honest. This feels like it shouldn’t be this hard to do in a tool such as Nintex for Office 365. Seven actions? Really? If I were a “citizen developer” there is no way I could have pulled off a task like this. Knowledge of JSON encoding is essential.

Chris

So, you want to delete users with the Azure AD Graph API? Good luck with that!

You might think that deleting users using the Azure AD Graph API would be pretty straightforward, right? You already have a registered application that succeeds in updating and creating new users. The documentation doesn’t provide any warnings about hidden dragons or secret pitfalls.

Rest assured, there is at least one gotcha that’s primed to eat your lunch when it comes to deleting users.  Fortunately for you, True Believers, I’m here to show you how you too can quickly overcome this less than obvious configuration issue.

According to the Azure AD Graph Reference, deleting a user is a simple operation. All you have to do is send the HTTP verb “DELETE” to the URL of the user you want to delete.

Example:

https://graph.windows.net/myorganization/users/user_id[?api-version]

The user_id can be the UserPrincipalName. In other words, the E-mail address of the user.
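If you would rather make the call from code than from Postman, a minimal C# sketch might look like the following. The tenant, user, api-version, and accessToken values are placeholders; acquiring the bearer token is out of scope here, just as it is in the Postman walkthrough below.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class DeleteUserSketch
{
    static async Task Main()
    {
        // Placeholder values - substitute your own tenant, user, and OAuth bearer token.
        var tenant = "mytenant.onmicrosoft.com";
        var userId = "John.Doe@mytenant.onmicrosoft.com"; // the UserPrincipalName works as user_id
        var accessToken = "<bearer token acquired elsewhere>";

        var url = $"https://graph.windows.net/{tenant}/users/{userId}?api-version=1.6";

        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", accessToken);

            var response = await client.DeleteAsync(url);

            // 204 (No Content) means the delete succeeded; a 403 with
            // "Authorization_RequestDenied" means you have hit the gotcha described below.
            Console.WriteLine((int)response.StatusCode);
        }
    }
}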

As an example, I will delete a pesky AD user named “John Doe”.  This John Doe character has got to go!

Azure

I use PostMan to get my API calls properly formatted. It also helps to ferret out problems with permissions or configurations. This helps me to *know* that it works before I write my first line of application code.

Note: Notice that I have an OAuth Bearer token specified in the header.  I won’t cover how I got this token in this post.  If you want to know more about how I acquire tokens for Console Applications send me an E-mail!

PostmanDelete1

Assuming you have your tenant ID, user ID, and OAuth token all set correctly, all you need to do is click “Send”. Your user is deleted as expected… right?

NOPE! You encounter the following JSON error response:

{
  "odata.error": {
    "code": "Authorization_RequestDenied",
    "message": {
      "lang": "en",
      "value": "Insufficient privileges to complete the operation."
    }
  }
}

Your first reaction may be to verify that your application registration is assigned the proper permissions on the AD Graph. However, there is no permission that allows you to delete; you can only choose variations of reading and writing.

AzurePermission

What do you do? If you Google Bing around a bit you will find that your Application needs to be assigned an administrative role in Azure via its ServicePrincipal. So, off you go searching the competing, overlapping portals of Azure trying to figure out how to assign an application a role within a resource. You may even be successful. We weren’t.

I had to use remote PowerShell to add my application to the appropriate role in order to delete users from AD.

REMOTE POWERSHELL TO AZURE AD

I used instructions from this MSDN article to download and install the Azure AD Module. First I downloaded the Microsoft Online Services Sign-In Assistant for IT Professionals RTW. Next, I grabbed the Active Directory Module for Windows PowerShell (64-bit version). Once I had my PowerShell environment up and running, I cobbled together a quick script to add my Application registration to the “User Account Administrator” role. Here is how I did it!

THE CODEZ

# Log me into my MSDN tenant using an account I set up as "global admin".
$tenantUser = 'admin@mytenant.onmicrosoft.com'
$tenantPass = ConvertTo-SecureString 'Hawa5835!' -AsPlainText -Force
$tenantCreds = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $tenantUser, $tenantPass

Connect-MsolService -Credential $tenantCreds

# Get the Object ID of the application I want to add as a SPN.
$displayName = "MyAppRegistrationName"
$objectId = (Get-MsolServicePrincipal -SearchString $displayName).ObjectId

# Set the Role name and add the Application as a member of the Role.
$roleName = "User Account Administrator"
Add-MsolRoleMember -RoleName $roleName -RoleMemberType ServicePrincipal -RoleMemberObjectId $objectId

PLAY IT AGAIN SAM

If you execute the PowerShell above (and it’s successful) then you can attempt to invoke the API again.  Click Send!

DeleteSuccess

Notice this time PostMan returns an HTTP status of 204 (No Content). This is the appropriate response for a successful DELETE. Let’s check our tenant to ensure Jon Snow is dead, or rather, that John Doe is deleted.

DeleteProof

He’s gone!  You are good to go.

CONCLUSION

Azure is a dynamic, new technology.  Documentation is changing almost daily. It can be frustrating to navigate the changing landscape of marketing terms and portals.

All the information you need to sort out this error is out there. However, I found it to be scattered and not exactly applicable to what I was doing. The PowerShell snippets existed in parts: one to log in to a remote tenant, another to add the role. This post simply serves to bring the information together so you can quickly get past this problem and on to writing more code.

Cheers!

Reading a SharePoint Online (Office 365) List from a Console Application (the easy way)

In a previous post I talked about our strategy of using scheduled console applications to perform tasks that are often performed by SharePoint timer jobs.

As we march “zealously” to the cloud we find ourselves needing to update our batch jobs so that they communicate with our SharePoint Online tenant. We must update our applications because the authentication flows for on-premises SharePoint 2013 and SharePoint Online are completely different.

Fortunately for us, we found the only change needed to adapt our list-access code was to swap instances of the NetworkCredential class for the SharePointOnlineCredentials class.

Imagine that this is your list reading code:

using (var client = new WebClient())
{
    client.Headers.Add("X-FORMS_BASED_AUTH_ACCEPTED", "f");
    client.Credentials = _credentials;  // NetworkCredential
    client.Headers.Add(HttpRequestHeader.ContentType, "application/json;odata=nometadata");
    client.Headers.Add(HttpRequestHeader.Accept, "application/json;odata=nometadata");

    /* make the REST call */
    var endpointUri = $"{_site}/_api/web/lists/getbytitle('{_listName}')/Items({itemId})";
    var apiResponse = client.DownloadString(endpointUri);

    /* deserialize the result */
    return _deserializer.Deserialize(apiResponse);
}

The chances are your _credentials object is created like this:

_credentials = new NetworkCredential(username, password, domain);

Here, the username and password are those of a service account specifically provisioned for SharePoint list access.

In order to swap the NetworkCredential class for SharePointOnlineCredentials, you first need to download and install the latest version of the SharePoint Online Client Components SDK (https://www.microsoft.com/en-us/download/details.aspx?id=42038).

Once the SDK is installed, add a reference to the Microsoft.SharePoint.Client and Microsoft.SharePoint.Client.Runtime libraries. Assuming a default installation, these binaries can be found here: C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\16\ISAPI\.

Be certain to reference the 16.0.0.0 version of the DLLs. If you get the 15.0.0.0 version (which is currently the version in NuGet) your code may not work!

Now you can “new up” your _credentials like this:

_credentials = new SharePointOnlineCredentials(username, password);

But “TV Timeout!” (as a colleague likes to say after a couple brews at the pub): the password argument is a SecureString rather than the garden-variety string. You will need a helper method to transform your plain old string into a SecureString. Here is how we do it:

public static SecureString GetSecureString(string myString)
{
    var secureString = new SecureString();
    foreach (var c in myString)
    {
        secureString.AppendChar(c);
    }
    return secureString;
}

One last thing to note: the SharePointOnlineCredentials class implements the System.Net.ICredentials interface. That’s what allows us to simply swap one class for another.

Therefore, if you are following the SOLID principles and using dependency injection, the extent of your code changes may look like this:

var securePassword = SecureStringService
    .GetSecureString(settings.SPOPassword);

container.Register<ICredentials>(()
    => new SharePointOnlineCredentials(username, securePassword));

Now that is cool!

Cheers and Happy Coding!

Applying the Concepts of the SharePoint App Model to SharePoint 2010

Legacy Code Is Still Out There

The SharePoint 2016 Preview was released in August and many companies are already moving toward the cloud and SharePoint Online. However, a good number of enterprises still have SharePoint 2010 (and perhaps older) farms hanging around. It’s likely those on-premises 2010 farms host highly customized SharePoint solutions and perhaps require occasional enhancements. This is the case in our organization.

Our development team was approached and asked to enhance a SharePoint 2010 solution so that our site could display news feeds from an external vendor.  The site must cache feeds so that the page displays correctly even if the remote site is unavailable at the time of page load.  Naturally, we asked our SharePoint 2010 developer to devise a solution to this problem.  A short while later the developer delivered a technical approach that is steeped in SharePoint tradition.

The SharePoint Way of Doing Things can be Expensive, Time Consuming and Disruptive

The solution proposes to provision content types, site columns, and lists in the usual way, via feature activation. Two lists would be created: one to hold the remote feed URLs and one to hold the content fetched from those feeds. A timer job would read from the feed configuration list and fetch the data, storing the results in the second SharePoint list. Lastly, a custom (server-side) web part would be created to read and display the contents of the retrieved news feeds list on the page with all the appropriate sorting, formatting, and style our users expect.

On the surface, this seems like a perfectly reasonable solution for the task at hand.   The use of a full-trust deployed solution to create needed plumbing such as content-types and lists was how it should be done in those heady, salad days of SharePoint 2010.  The proposed solution can confidently claim that it adheres to the best practices of SharePoint 2010.

However, there are drawbacks to going with a traditional SharePoint-based solution. Before the advent of the sandboxed solution in 2010 it was very easy for a poorly written SharePoint solution to adversely affect the farm on which it was installed. Custom code has caused many a SharePoint admin sleepless nights. We don’t want to introduce new code to the farm if it’s not completely necessary.

Our team employs SharePoint developers as well as .NET developers. Our contract SharePoint developers command a higher hourly rate than our “run of the mill” .NET developers. As our industry is extremely cost sensitive right now, it would be great if we could avoid the use of specialized SharePoint developers for this one-off project.

This last bit could be unique to our organization and may not be applicable to yours. We have a stringent process for SharePoint deployments. Suffice it to say, from the first request to have code promoted to test, a minimum of two weeks must pass before the code is deployed to production. Content updates, such as adding web parts and editing pages, are not subject to this testing period. The ideal solution would avoid a formal SharePoint deployment.

Why the SharePoint App Model is Cool!

The SharePoint app model was introduced with Office and SharePoint 2013. With the app model, Microsoft no longer recommended that developers create solutions that are deployed directly on the SharePoint farm. Rather, developers create “apps” that are centrally deployed from an app catalog and run in isolation from SharePoint processes. SharePoint App Model code runs entirely on the client (browser) or in a separate web application on a remote server. Apps’ access to SharePoint internals is funneled through a restricted RESTful API.

The app model prevents even the worst-behaving application from affecting the SharePoint farm. This means the farm is more stable. Additionally, applications written using the App Model do not require a deployment to the farm, or at least not the type of deployment that would necessitate taking the farm into maintenance or resetting IIS. Under the App Model, SharePoint remains up even as new applications are made available. Customers are happy because you can quickly pound out their requests and make them available, and admins are happy because your custom code isn’t taking down their farm (allegedly).

Sadly, the app model doesn’t exist for SharePoint 2010, or does it?  While specific aspects of the App Model do not exist in SharePoint 2010 you can still embrace the spirit of the App Model!  The very heart of the SharePoint App Model concept is running custom code in isolation away from SharePoint.  In our case we really only need to interact with SharePoint at the list level. Fortunately, SharePoint 2010 provides a REST API for reading and writing to lists.

Let’s re-imagine our solution and apply App Model-centric concepts in place of traditional SharePoint dogma.

First let’s use PowerShell scripts to create our Site Columns, Content Types, and lists rather than having a solution provision these objects on feature activation.

Next, let’s replace the SharePoint timer job with a simple Windows console application that can be scheduled as a Windows scheduled task or kicked off by an agent such as Control-M. This console app will read a SharePoint list using the REST API, then run out to fetch the content from the Internet, writing the results back to a second list using the REST API. A sketch of this approach follows.
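As a rough illustration, here is a minimal C# sketch of that console application. The site URL, list names, and feed URL are hypothetical, and it assumes the SharePoint 2010 listdata.svc REST interface with the scheduled task running under a service account:

using System;
using System.Net;

class NewsFeedCacheJob
{
    // Hypothetical site URL and list names, used for illustration only.
    private const string Site = "http://sharepoint2010/sites/news";

    static void Main()
    {
        using (var client = new WebClient())
        {
            // Run under the service account configured for the scheduled task.
            client.UseDefaultCredentials = true;

            // 1. Read the feed configuration list via the SharePoint 2010 REST interface (listdata.svc).
            client.Headers[HttpRequestHeader.Accept] = "application/json";
            var feedConfigJson = client.DownloadString(Site + "/_vti_bin/listdata.svc/FeedConfiguration");

            // 2. Fetch the remote content (parsing feedConfigJson for the feed URL is omitted for brevity).
            var feedContent = client.DownloadString("http://vendor.example.com/news/rss");

            // 3. Write the fetched content back to the cache list with a POST to the same interface.
            client.Headers[HttpRequestHeader.Accept] = "application/json";
            client.Headers[HttpRequestHeader.ContentType] = "application/json";
            var body = "{\"Title\":\"Vendor feed\",\"Content\":\"...escaped feed content...\"}";
            client.UploadString(Site + "/_vti_bin/listdata.svc/CachedFeeds", "POST", body);
        }
    }
}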

Finally, we can substitute our server-side web part with a Content Editor Web Part that uses JavaScript/jQuery to access our news feed list via, you guessed it, the REST API. The contents can then be styled with HTML and CSS and displayed to the user.

It’s worth mentioning that the UI aspect of this project could suffer from the lack of a formal App Model, and this is where a true farm deployment may be superior. In a true App Model scenario, apps are deployed to a central App Catalog and can be deployed to sites across site collections. In order to deploy this Content Editor Web Part to multiple site collections we would need to manually upload the HTML, CSS, and JavaScript to the Style Library of each site collection. Imagine having dozens or even hundreds of site collections. An actual solution deployment would have afforded us the ability to place common files in the _layouts folder where they would be available across site collections. Fortunately for us the requirement is only for a single site collection.

By cobbling together a set of non-SharePoint components we have, essentially, created an App Model-like solution for SharePoint 2010; a poor-man’s App Model if you will.

In my opinion, this solution is superior to the SharePoint way of doing things in the following ways:

  • Ease of Maintenance / Confidence – Using PowerShell to create columns, content types, and lists is better because scripts can be tested and debugged easily.  Deployments that provision sites are more complicated and time consuming.  From the perspective of a SharePoint admin, PowerShell is likely a known entity. Admins can examine exactly what this code will be doing to their farm for themselves and perhaps gain a higher level of confidence in the new software being deployed.
  • Lower Development Cost / Ease of Maintenance – A Windows console app is superior to a timer job because you don’t need to pay an expensive SharePoint developer to create or support a solution on a deprecated platform (SP 2010).  Maintaining a console application requires no specific SharePoint experience or knowledge.  In our case, we have an entire team that ensures timed jobs have run successfully and can alert on failure as needed.
  • Reliability / Availability – There is no custom code running within the SharePoint process.  This means there is NO chance of unintended consequences of misbehaving code created for this solution affecting your Farm.
  • Standards Based – HTML, JavaScript, and CSS are basically universal skills among modern developers and standard technologies.
  • No Deployment Outage – This solution can be implemented without taking down the SharePoint farm for a deployment.  Adding a simple content editor web part does not interrupt business operations.
  • Ease of Portability / Migration – Our solution, using a console app, HTML, and JavaScript, works just as well on SharePoint 2013 and Office 365 as it does on SharePoint 2010, whereas a traditional SharePoint solution cannot be directly ported to the cloud.

Conclusion

There is a lot of legacy SharePoint 2010 out there, especially in large enterprises where the adoption and migration to newer platforms can take years. Occasionally, these older solutions need enhancements and support.  However, you want to spend as little time and money as possible on supporting outdated platforms.

We needed a solution that had the following characteristics:

  • We didn’t want to continue to write new server-side code for SharePoint 2010.
  • We wanted a solution that didn’t require an experienced SharePoint developer to create and maintain.
  • We wanted code that was modular and easily migrated to Office 365.
  • We wanted to avoid a formal SharePoint deployment and its associated outage.

A traditional SharePoint solution was not going to get us there.  Therefore, we took the best parts of the SharePoint App Model (isolation, unobtrusive client side code, and RESTful interfaces to SharePoint) and created a holistic solution that fulfilled the customers’ expectations.

-Chris


Uploading an Existing Local Git Repository to BitBucket.

I use BitBucket for all my recreational, educational, and at-home programming projects. I like the fact that you can have free, private repositories. BitBucket supports Git as well as Mercurial.

Typically, I will create a new BitBucket repository and then use the Git Bash shell or Visual Studio to clone the project from BitBucket and simply add files to the new local repository.  However, there are times when I will start a local repository first and later decide that I like the project and want to save it off to BitBucket.

This is the procedure I use to upload an existing local Git repository to BitBucket.

Step 1 – Create a New Git Repository on BitBucket.

newRepo

Step 2 – Open your Git Bash Shell and Navigate to the Location of your Git Repository

Note: The folder that contains the .git directory is the path we are looking for.

$ cd Source/Repos/MyProject/

navigateRepro

Step 3 – Add the Remote Origin

Note: You will need the remote URL of the repository you created on BitBucket. You can find this URL on the Overview screen for your repository in the upper right corner of the page.

$ git remote add origin https://myusername@bitbucket.org/myusername/myproject.git

addOrigin

Step 4 – Push your Repo and All its References

$ git push -u origin --all

You will be prompted to enter your BitBucket password.

pushAll

Step 5 – Ensure all Tags get Pushed as Well

$ git push -u origin --tags

Again, you will be prompted to enter your BitBucket password.

pushTags

If all goes well you will see the “Everything up-to-date” message displayed in the Git Bash shell.

The procedure above will move the entire repository. That means if you created local branches, those are pushed up as well. It’s pretty cool really. Once the remote origin is set, you can commit changes locally and then use Visual Studio’s built-in Git support or the Git Bash shell to sync your changes “to the cloud”.

sourceView

Happy Coding!


A Developer’s First Estimate Using Microsoft Project

At some point you will be asked to create an estimate for a new project, feature or bug fix. I use Excel for creating simple project estimates. Excel serves well for this purpose. I already know how to use Excel, it’s simple, and virtually anyone can view the resultant *.xlsx file. My Excel estimates usually look something like this:

bp1-figure1

What’s not to like about this estimate? It’s nicely formatted. I highlighted the bottom line, which is the only line pointy-haired bosses likely look at. The truth is that this estimate is just fine for simple cases like minor bug fixes and one-off, small projects. As a front-line developer, all my estimates were created quickly in Excel just like the example above.

Now that I am a technical lead, the projects I am asked to estimate are larger, more complex, and involve tasks that may themselves consist of subtasks. When an estimate expands to anything more complicated than a simple single-level list of tasks and hours, you need to consider using a serious estimating and scheduling tool like Microsoft Project.

In this blog post I provide a couple of tips that I have found useful for jumping into Project and using it much like I used Excel for “simple” project estimation. Again, these steps work for me. I hope you find them useful too.

Open Microsoft Project and Start with a Blank Project

You don’t need a PMP or PRINCE2 certification to use Microsoft Project. Before we get started, take a deep breath and relax. It’s not as bad as you think!

I start with a new Blank Project template. Project ships with a Software Development Plan template which may be very useful, but that is beyond the scope of this discussion.

bp1-figure2

Add the Work (hours) Column

Our Excel estimate consisted of a Task column and a column I used to sum up work hours. We see a Task column, but the Blank Project template does not include a Work column. In order to make Project resemble my Excel estimates, I want to add the Work column.

Right click on the Duration column and select Insert Column.

bp1-figure3

You can filter the extensive list of columns by typing “Work” in the column header.

bp1-figure4

Now your screen should begin to look familiar. You will have a Task and adjacent Work Column.

The next thing I like to do is name my project on the first line. This is critical to getting your work hours to roll up. On the first line click into the Task Name column and type the name of your project. In this example I used “My Software Project”. Don’t worry about providing a value for the hours column on this line. Project will do this for us.

bp1-figure5

Next begin adding the individual tasks that comprise the project along with an hours estimate of each task. Your estimate should resemble the image below.

bp1-figure6

Grouping Tasks

Now, let’s do something very cool! Let’s highlight each of the tasks below the first line, “My Software Project”. Once these lines are highlighted, click the indent button in the toolbar.

bp1-figure12

bp1-figure7

Now you can see that the sum of the hours for all tasks have rolled up into the top level “My Software Project” task. Pretty cool eh?

More Task Grouping

You can’t have too much of a good thing. Just as it is really helpful to see the total hours of our project, we can take that a step further by rolling our tasks into subdivisions.

Our estimate contains two distinct types of activities: those where we are actually building the solution and those where we are testing and reworking it. Using the same steps we performed above, we can further group related tasks by indenting. However, first we need to add a couple of rows to represent rollup lines. We are going to add Development and Testing tasks.

Tip: you can easily add tasks by clicking into the row where you want the new task added and pressing the insert key. The task will be inserted on the current line and existing tasks will be moved down to accommodate the new row.

Next we are going to highlight the tasks that fall below Development and above Testing and click the indent button. Repeat these steps for the tasks that fall below Testing but before Deployment. Your estimate should look like this:

bp1-figure8

Notice how all the “development” tasks are rolled up to the development line and all the “testing” tasks are rolled up to the Testing line. Also note that the overall project My Software Project rolls up the total hours for the entire project!

Nesting your tasks allows you to easily analyze your estimate and know where the hours are going. In this case, we can see that roughly half of the project time is spent actually writing code whereas the other half is dedicated to testing and rework. I can almost smell the cardstock on my PMP certification!

Predecessors and Scheduling

We could just leave well enough alone. As it stands, we created a perfectly good, simple estimate using Microsoft Project. However, I feel that I would be remiss if I didn’t mention predecessors and scheduling.

You know as well as I do that once your boss drinks in the beauty that is your estimate the next words that are spoken will be “Looks great! When can we have this in production?” Let’s address that question.

To complete our simple estimate we are going to spend just a few moments on simple scheduling! This means filling the predecessors column for our tasks and setting the task mode.

Setting the Task Mode to Auto Schedule

Once you have a good understanding of the task details of a project, you can switch the task’s mode to Auto Schedule. To do this, click into the Task Mode column and select Auto Scheduled from the drop-down list. Repeat this selection for every task in the project. Setting the task to auto schedule allows Project to create a project schedule based on the amount of work, predecessor tasks, the number of resources assigned to the task, and other factors such as weekends and work holidays.

bp1-figure9

Setting Task Predecessors

Setting a task’s predecessor helps Project create a schedule by defining which prerequisite tasks must be completed before a subsequent task can begin. For instance, setting Task B’s predecessor to Task A constrains Task B’s start date to the completion (end) date of Task A. To do this we use the Predecessors column in Project.

bp1-figure10c

Notice that the predecessor column in task four (4) is set to task three (3). This means that task four (4) cannot begin until task three (3) is complete. I filled in predecessor values for all the tasks in the project.

Tip: Don’t set the predecessor value for a row that is used for rolling up subtasks. For instance, I left the predecessors of the Development and Testing rows blank. Instead, set the predecessor of the first task in the group to the last task in the preceding group.

Assuming you have set all your predecessor tasks correctly and your tasks are set to auto schedule you should begin to see the Gantt Chart fill itself out with your estimated project schedule.

bp1-figure11

This is Just the Beginning

The purpose of this blog was just to get you started using Project for estimates. The goal is that you could hand the estimate over to a PM who would transform it into a fully realized project plan.

There are many other concepts that apply to scheduling such as resources and concurrent tasks. Perhaps these are a topic for a subsequent post.
