Imagine you have a list or library item of a custom content type and you want a user to update an existing field of that content type via a “Collect data from a user” action. Furthermore, you want the task to show the current value of this field when the user edits it. After the task is completed, the new value of the field should be saved back to the original item on which the workflow was started. A complete circle:

First you need a custom content type. I created an example content type with one extra column: “Some Extra Comment”. The value of this column will be sent to the task. Add the newly created content type to a document library.

Next, in SharePoint Designer, create a new reusable workflow for the content type you have just created. Add the action “Collect data from a user” to the workflow. Enter a title (I named it “Get some data”) and optionally a description for the action, and click Next. Now add a field named “ExtraComment” of type “Single line of text”:

Click Next and leave the default value empty. Click Finish. Add a “User” to your “Collect data from a user” action. Your action should now look something like this:

Now this is very important: the ID of the newly created task will be stored in the local workflow variable “collect”.

Next, save and publish the workflow and associate it with the content type:

Now add a document with your newly created content type to your library and run the workflow on the document. This is necessary because the custom column from your “Collect data from a user” action (“ExtraComment”) has to be added to the Tasks list, and that happens the moment a task is created.

So go to your Tasks list and make sure a task has been created there. Next, complete the task. The workflow on the original item should now also be completed.

Now re-open the workflow in SharePoint Designer and add the “Set Field in Current Item” action. This action will be executed AFTER the task has been completed, so “ExtraComment” will have its new value, and we have to get that value from the task in the Tasks list:

Everything is pretty standard: the data source to get the value from is “Tasks”. The field is “ExtraComment”. The specific item the value should come from is the item whose ID equals “collect” (which is, as you will remember, the ID of the task that was created).

Your workflow should now look something like this:

Publish your workflow. At this point, all you need to do is make sure that when the task is created, it gets the current value of “Some Extra Comment” from the item the workflow runs on. For that we need to create a list workflow on the Tasks list:

Add a “Set Field in Current Item” action. This time the data source is the document library that holds the original workflow item (in my case I used “Shared Documents”). The field you want the value from is “Some Extra Comment”, and the specific item you want this data from is the one whose ID corresponds to the field “Workflow Item ID”:

Your Task List workflow should look something like this:

Publish your Task List Workflow.

Now make sure that the Task List Workflow starts automatically when a task is created. You need to do this because the data from the original item has to be retrieved at the moment of task creation. You can do this in SharePoint Designer under “Lists and Libraries”: select the Tasks list and change the settings of your workflow to:

Save the settings. Everything should work now! Add a document to your library and set “Some Extra Comment” to “This is my first comment”. Now run the workflow and open your Task. It should look something like this:

Add some text to the “ExtraComment” field and complete the Task:

Now go back to your original item. The workflow should be completed. Next, select “View Properties”:

On April 28 and 29, some colleagues from QNH and I went to the DevDays in The Hague (the Netherlands). I decided to write a short summary of what we saw and to give some resources for further reading. So here it goes:

Azure and AppFabric
There were a lot of sessions about Windows Azure. Everything indicates that the cloud is really taking flight in 2011/2012. We got an introduction to the three different middleware services that Azure currently offers:

LightSwitch
Next we went to see the presentation “LightSwitch Beyond the Basics” by Beth Massi. LightSwitch is a framework on which you can build mash-up applications in Visual Studio really fast. With LightSwitch you can connect to a lot of different data sources. Data source types include SQL Server, WCF RIA Services and, yes: SharePoint lists and libraries. Next, you can create a data model based on those data sources. For example: you can connect your SQL table to a SharePoint library. Creating the data model feels a lot like creating a data model in MS Access. It is also possible to add validation to your fields or entities (like: the date in this date field should be after 1/1/1990). On top of your data model you can create CRUD (Create, Read, Update & Delete) screens based on Silverlight 4. And everything I described above can be done without writing a single line of code!

However, the best thing about LightSwitch is that you can write code when needed. Events in the validation, the data sources and the views can be overridden. Furthermore, it is possible to create your own Silverlight controls that can be used in the views (for example the Bing Maps control). LightSwitch is still in beta, but I think it’s definitely worth taking a look: http://msdn.microsoft.com/en-us/lightswitch/ff796201. I will be writing some more posts on LightSwitch and SharePoint.

MVC 3
We went to see Scott Hanselman’s presentation about MVC 3 (somehow the Microsoft MVC framework reminds me a lot of the Java Struts framework). Scott gave an introduction to “Razor”, a syntax for creating ‘cleaner’ markup in hybrid HTML/code pages (something which, as I remember, was also always an issue with Java JSP pages). For an introduction to Razor see Scott Guthrie’s blog: http://weblogs.asp.net/scottgu/archive/2010/07/02/introducing-razor.aspx.

Scott also talked about Modernizr. With the coming of HTML5 and CSS3, there are browsers that support the new features and browsers that do not yet support them. When you create a new website you want to make use of the newest features, but you also want your website to keep working on older browsers. With the Modernizr framework you can make your website backwards compatible. In other words: if the browser supports the new technologies, your site will make use of them; if it does not, it will instead render controls and features that the browser does support. For more information on Modernizr see http://www.modernizr.com/.

Something else he talked about was NuGet (see http://nuget.codeplex.com/), a tool that makes using third-party libraries in .NET development easier. NuGet automates the tasks of downloading and incorporating third-party libraries (including their dependencies) in a project.
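As an illustration (the package name is just an example), installing a library from Visual Studio’s Package Manager Console — itself a PowerShell host — is a one-liner:

```powershell
# Run inside Visual Studio's Package Manager Console.
# Downloads the package plus its dependencies and adds the references to the project.
Install-Package Modernizr
```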

Here’s an impression of the DevDays 2011 in The Hague (the photo was taken with my phone, so the quality is not really good…):

When you upload a docx file to a document library, SharePoint adds custom column metadata to the document. For example: I’ve added two custom columns to a library: a text field (CustomColumn1) and a drop-down column (CustomColumn2). Next, I create an example docx document named “my_testdocument.docx”, upload it to the document library and fill in the extra metadata:

Next, I download the document to my desktop and change the docx extension to zip, so I can open the document. You can see that the inside of a docx file is completely XML:

I open the zip and look in the “customXml” folder for my custom columns. They appear in a file named “item2.xml”. Next, I open the “item2.xml” file with an editor (in my case Visual Studio) and change the values of my custom column metadata:

After saving the file and renaming it to “my_testdocument2.docx”, I upload the edited file to the SharePoint library:
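The manual unzip/edit/rezip round-trip above can also be scripted. A docx file is an OPC package, so it can be opened in place with the System.IO.Packaging API from .NET 3.0. A minimal PowerShell sketch — the file path, the part name and the old/new values are assumptions for illustration:

```powershell
# Sketch only: path, part name and values are assumptions.
Add-Type -AssemblyName WindowsBase   # contains System.IO.Packaging (.NET 3.0+)

$path = "C:\temp\my_testdocument.docx"
$pkg  = [System.IO.Packaging.Package]::Open($path,
          [System.IO.FileMode]::Open, [System.IO.FileAccess]::ReadWrite)
try {
    # The custom column data lives in the /customXml/item2.xml part
    $uri  = New-Object System.Uri("/customXml/item2.xml", [System.UriKind]::Relative)
    $part = $pkg.GetPart($uri)

    $stream = $part.GetStream()
    $xml    = (New-Object System.IO.StreamReader($stream)).ReadToEnd()

    # Change a custom column value (a simple text replace for the sketch)
    $xml = $xml -replace "This is my first comment", "This is my edited comment"

    # Truncate the part and write the edited XML back
    $stream.SetLength(0)
    $stream.Position = 0
    $writer = New-Object System.IO.StreamWriter($stream)
    $writer.Write($xml)
    $writer.Flush()
}
finally {
    $pkg.Close()   # flushes the edited part back into the docx
}
```

This saves you the rename-to-zip step: the package API reads and writes the docx directly.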

Splitting a content database in SharePoint 2010 can be useful when a site collection becomes too big and you want to move it to another database. The same technique can also be used to combine site collections from different content databases into one content database (maybe because, in the end, the content did not grow as fast as you had expected).

Start by exporting the site collection. Go to Central Administration and choose Backup and Restore:

Next choose ‘Perform a site collection backup’:

Choose a site collection and give a name and location for your backup:

You have now exported the site collection. Next we will import it into another content database. Go to Central Administration -> Application Management -> Manage Content Databases and create a new content database for the web application in which the exported site collection lives:

Now we will move the site collection to the other content database. Start up PowerShell (the SharePoint 2010 Management Shell) and enter the following command:
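The command itself was shown in a screenshot. A likely candidate for this step is the Move-SPSite cmdlet; the site URL and database name below are assumptions, so replace them with your own values:

```powershell
# URL and database name are assumptions -- replace with your own values.
Move-SPSite http://intranet/sites/mysite -DestinationDatabase WSS_Content_New

# Move-SPSite requires an IIS reset before the moved site collection is served again:
iisreset
```

Alternatively, the backup made earlier can be restored into the new content database with Restore-SPSite and its -ContentDatabase parameter.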

I just created my first project on CodePlex: two custom document set actions that can come in handy when using document sets in SharePoint Designer workflows.

With the first action you can get a property (a column value) from the parent document set while the workflow is running on an underlying document. The value is stored in a local workflow variable and can be used to update a column of the current item or to make a workflow decision.

In English the above action reads:
Returns the Column of the parent DocumentSet in Variable.

The second action I created unrecords all documents in a document set. This action comes in handy if you want to send a document set to another location while some of its underlying documents are records or in-place records.

In English the above action reads:
Unrecord all Documents in DocSet
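The server-side logic behind the unrecord action can be approximated with the Records API from the server object model. A rough PowerShell sketch — the web URL and document set path are assumptions, and this skips the nested-folder handling a real action would need:

```powershell
# Sketch only: the web URL and document set path are assumptions.
# Run in the SharePoint 2010 Management Shell.
[void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.Office.Policy")

$web    = Get-SPWeb "http://intranet"
$folder = $web.GetFolder("Shared Documents/MyDocumentSet")

foreach ($file in $folder.Files) {
    $item = $file.Item
    # Undeclare only the documents that are currently records
    if ([Microsoft.Office.RecordsManagement.RecordsRepository.Records]::IsRecord($item)) {
        [Microsoft.Office.RecordsManagement.RecordsRepository.Records]::UndeclareItemAsRecord($item)
    }
}
$web.Dispose()
```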

First impression: the emulator in Visual Studio for testing your application works great. The downside is that if you want to deploy your program to a real device, you have to become a member of the XBOX Live Marketplace, which costs $99. I mean: it is my device, so I can do with it what I want, right?

This is a very generic error message and the SharePoint error log does not give a lot of extra information.

Yesterday, while testing a ‘send to’ connection to a record center, we kept getting this message, but only when trying to submit a document set. We had no problems submitting individual documents.

The solution is to create at least one routing rule in the ‘Records Pending Routing’ library that you are sending your document set to. After creating the rule, everything worked as it should.