The SharePoint 2013 App Store is rife with possibilities. While the number of apps in the store is constantly growing, there is still plenty of room for all sorts of new apps. TCSC recently made the jump into the store with our very first app, Profile Jumpstarter.

The idea for the app came from a recent push to get all of our employees to fill out their profiles in SharePoint. The profiles had some basics filled in, but for the most part they were pretty blank. We needed something to encourage employees to fill them out and to see at a glance how far they were from completing everything. We looked in the store and couldn’t find exactly what we were looking for. So we thought, “Hey, we have some pretty darn good developers in house. Let’s write our own!” So we did.

I presented this topic at the Virginia Beach SharePoint Saturday on Jan 11, 2013. Here are my slides, in case you want to check them out.

And without further ado, here are my Top 10 things learned when creating a SharePoint App:

  1. Writing an App is a lot different from writing a traditional farm solution.

    When writing traditional SharePoint solutions you have access to pretty much everything. When writing an App you have access to only a limited set of things (even more so if you are writing a SharePoint-hosted app).

  2. There are a ton of options when starting an app.

    When creating your app there are a lot of options. SharePoint Hosted, Auto-Hosted, or Provider-Hosted? Web Forms or MVC? Do you want to use the Azure Access Control Service or supply your own certificate? There is a lot to consider with each option, and it can be a little overwhelming.

  3. Give yourself plenty of time to get into the store.

    When getting your first app into the store, make sure you give yourself plenty of time to go through the approval process. You may find some surprises: they told us that our company (which has been around for over 30 years) was out of business.

  4. The good people reviewing your app are actually quite helpful and give detailed errors/issues.

    The people reviewing your app will give you a PDF of all the issues they uncovered when testing it. This PDF may contain screenshots of exactly what they saw, the requirement you failed on, and, where possible, some remediation steps. Very helpful.

  5. You have to support IE8.

    SharePoint 2013 supports Windows 7, and therefore IE8, which shipped by default on Windows 7. If your app doesn’t support IE8, you simply have to say so in the app’s description.

  6. There are some pretty specific and rather esoteric size requirements for app icons and screenshots when submitting to the store.

    I think this bothered me more than it really should have, but the size requirements for the screenshots and the app icon you upload when submitting your app are really strange. Your app icon must be 96×96, and the screenshots you upload have to be 512×384. The app icon I can understand more: 96 is a strange number, but they had to pick something, and you want everything to look uniform. Sure, makes sense. But 512×384 is just odd. It isn’t a normal screen resolution and seems to come out of the blue.

  7. You can develop in any language and on any platform when creating apps.

    If you wanted to create a SharePoint app on a LAMP (Linux, Apache, MySQL, PHP) stack, you absolutely could. Obviously it would have to be provider-hosted, but the point is that you can do it.

  8. You have to be language and region specific when submitting to the store. Just English was not enough. We had to specify English-US.

    You have to be very explicit when defining your languages. I was unable to just say English. I had to say US English. While my app should work perfectly well for our friends across the pond, I only listed English-US as a supported language.

  9. You cannot auto-host an app and put it into the App Store.

    While auto-hosting your SharePoint app is an awesome idea and a great option in many cases, you cannot auto-host an app and put it into the SharePoint Store. Which I suppose makes sense, since allowing it would open up a lot of security holes.

And the number 1 lesson learned when writing a SharePoint 2013 App is…

  1. Creating apps is relatively easy and painless. You can do it! Or of course we do have some pretty darn good developers in house…but I am a little partial.

This post is also posted on my company site.


Raleigh Code Camp Re-Cap

On November 11, 2013, in AngularJS, JavaScript, Speaking, by admin

I had a great time this past weekend at Raleigh Code Camp, where I was lucky enough to be chosen as a speaker.  I gave a talk on one of my favorite topics, AngularJS.  I had some people come up to me at the end and ask questions about best practices, since in my talk I kept repeating the phrase “If this was a real application I wouldn’t really do it this way, this is only for demo purposes.”  So, as promised, I am putting up some links with lessons learned and some further reading.

  • First and foremost, the Angular API.  This is invaluable in your quest to become an Angular master
  • AngularJS Sticky Notes Architecture - more lessons learned from the field; the author covers a few of the same growing pains that I ran into and how to handle them
  • This is a 3-part series on Angular best practices that I found very useful.  It covers some pieces of Angular that I didn’t cover in my talk, such as Angular Seed and testing your app

 

Here is my slide deck and Code Samples from my presentation.


Cloud Seminar Presentation

On October 1, 2013, in Sharepoint, Speaking, by admin

I know I have been a bit quiet out here for the past few months. I’ve been very busy with work and at home. But I just wanted to put out my slide deck from a recent presentation I did about Office 365 and SharePoint 2013 apps.

Look for more frequent updates coming soon. Also, go sign up for the SharePoint Revolution Seminar, where I will be talking about LightSwitch on October 16 in Virginia Beach and October 17 in Richmond.

 


The trend for writing web services in the .Net community is moving from SOAP to REST. There are many reasons for this, but the most compelling may be the difference in “weight”: with SOAP you must wrap everything in an envelope before sending it, while with REST you basically just send it. There are very few restrictions on what you can send over.

Consuming these new REST services is very simple and very well understood, and it turns out that writing them using MVC is extremely easy as well. Starting with MVC4 you can use the WebAPI to create a REST service very quickly, using a model that you are already familiar with.

The Employee Service

For the purposes of this demo I am going to create a service to pull employees. To do that, I first need to create an empty EmployeesController:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Web.Http;
 
namespace ODataTest.Controllers
{
    public class EmployeesController : ApiController
    {
 
    }
}

As well as an Employee Model:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
 
namespace ODataTest.Models
{
    public class Employee
    { 
        public int Id { get; set; }
        public string FirstName { get; set; }
        public string LastName { get; set; }
        public string ManagerName { get; set; }
    }
}

In the EmployeesController class you can see that I am inheriting from ApiController instead of the normal MVC Controller class. In a nutshell, that is pretty much all you need to do to create a REST service. I told you this was easy, right?

My First Call – Get All

This class doesn’t have anything in it, so if you were to call it you wouldn’t get anything back. Let’s implement the default call and return all employees. I am not going to create a data access layer; since this is just a demo, I am going to create all my employees in my Get method and return them from there.

        [HttpGet]
        public Employee[] Get()
        {
            return new[]
                {
                    new Employee{Id = 1, FirstName = "Tony", LastName = "Stark", ManagerName = "Nick Fury"} ,
                    new Employee{Id = 2, FirstName = "Bruce", LastName = "Banner", ManagerName = "Nick Fury"} ,
                    new Employee{Id = 3, FirstName = "Bruce", LastName = "Campbell", ManagerName = "Bruce Campbell"} ,
                    new Employee{Id = 4, FirstName = "Peter", LastName = "Parker", ManagerName = "Aunt May"}  
                };
        }

There isn’t much here. We have a method named Get that is decorated with the HttpGet attribute, which tells the framework that this method responds to GET requests. If I call this method I will receive a JSON representation of the Employee array back. I didn’t have to “stringify” anything; the framework serializes it for you.

If we wanted to consume this method, we would call a URI matching the pattern http://<yoururl>/api/Employees, and from this call we would receive the following JSON back:

JSON from service call

Get Single Employee

That is a pretty simple case, so let’s expand on it. If we wanted a specific employee, we would add the following method:

        public Employee Get(int id)
        {
            var employeeList =  new List<Employee>
                {
                    new Employee{Id = 1, FirstName = "Tony", LastName = "Stark", ManagerName = "Nick Fury"} ,
                    new Employee{Id = 2, FirstName = "Bruce", LastName = "Banner", ManagerName = "Nick Fury"} ,
                    new Employee{Id = 3, FirstName = "Bruce", LastName = "Campbell", ManagerName = "Bruce Campbell"} ,
                    new Employee{Id = 4, FirstName = "Peter", LastName = "Parker", ManagerName = "Aunt May"} 
                };
 
            var emp = employeeList.FirstOrDefault(x => x.Id == id);
            return emp ?? new Employee();
         }

The MVC framework will parse your URI, which should look like http://<yoururl>/api/Employees/1, and then find the correct Get method. This is all done by convention, so if the last bit of your URI isn’t an integer, this will fail. When I call this service I should get a JSON object back that represents Tony Stark. You may have noticed that I didn’t put the HttpGet attribute on this method. I could put it on there and everything would work fine. It turns out that because the method is named Get, the framework maps it to GET requests by convention and won’t allow a POST against it.
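To round this out, here is a sketch of consuming both endpoints from the browser with a little jQuery (which this blog uses elsewhere). The base URL and the small helper that builds the route are my own illustrations, not part of the framework.

```javascript
// Builds the conventional WebAPI route: /api/Employees or /api/Employees/{id}.
// (The helper name and base URL are illustrative, not part of WebAPI itself.)
function employeesUrl(baseUrl, id) {
    var url = baseUrl + '/api/Employees';
    return id === undefined ? url : url + '/' + id;
}

// In a page that references jQuery, the service can be consumed like this.
// The typeof guard just lets the sketch load where jQuery isn't present.
if (typeof $ !== 'undefined') {
    // GET all employees
    $.getJSON(employeesUrl(''), function (employees) {
        console.log(employees.length + ' employees returned');
    });

    // GET a single employee by id
    $.getJSON(employeesUrl('', 1), function (employee) {
        console.log(employee.FirstName + ' ' + employee.LastName);
    });
}
```

Because the service returns JSON, the callback receives plain JavaScript objects with the same property names as the C# model (FirstName, LastName, and so on); no manual parsing required.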

Summary

This only scratched the surface of the WebAPI. I will be posting more about the WebAPI in the coming weeks, covering more in-depth topics such as posting back to the service to insert data and using the ODataController to do some neat stuff with sorting and filtering.

To quickly recap what we learned today:

  • Creating REST services with MVC4 and the WebAPI is very simple
  • Instead of using Controller use ApiController to create a REST service
  • Just like base MVC, the WebAPI is very convention-based, so things are very easy to follow

HTML5 Localstorage

On May 22, 2013, in HTML5, JavaScript, JQuery, by admin

One of my favorite features of the ‘new’ HTML5 spec is local storage. Local storage is a persistent store in the user’s browser that can save key/value pairs without much overhead. This data is never transferred to the server; it only ever exists locally (hence the name) and can be used for any sort of data storage. There is an upper limit on the amount of data you can store, which depends on the browser (commonly around 5 MB per site).

Why would I want to use local storage?

As I mentioned before, you can use it for any data storage, so there aren’t any real limitations on what you can store. However, everything is stored in plain text in the browser, so you wouldn’t want to put sensitive information in there.

I tend to use local storage for static data that I know I am going to use over and over again throughout an application, such as lookup data. Local storage is very useful when creating single page applications (SPAs) because you can load the data into local storage and never have to go back to the server to grab it. And if this is a site your users are on a lot, they won’t have to go back to the server the next time around, since the data is already stored in their browser.
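That lookup-data pattern can be sketched as a small helper. The storage object is passed in rather than hard-coded so the same function works against window.localStorage in the browser, a fallback object, or a plain object in a test; the function name and keys are my own, not part of any API.

```javascript
// Returns cached lookup data if present; otherwise fetches it once and
// caches it for subsequent calls. `storage` is any object keyed by strings
// (window.localStorage in the browser, or a plain object as a fallback);
// `fetchFn` is whatever call loads the data from the server.
function getLookupData(storage, key, fetchFn) {
    if (storage[key] !== undefined) {
        return JSON.parse(storage[key]);   // cache hit: no server round trip
    }
    var data = fetchFn();                  // cache miss: go get the data
    storage[key] = JSON.stringify(data);   // stash it for next time
    return data;
}
```

In a real SPA, fetchFn would typically be an AJAX call; the point is that after the first visit, the data comes straight out of the browser.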

Just a quick word about private browsing: you can still use local storage in a private browsing session, but the data only persists for the duration of that session. Anything you stored in local storage before going into private browsing will not be available during the private session. Most applications won’t have to worry about this; it is just something interesting to note.

Usage

In the example below I am going to be using jQuery and a tool called Modernizr to make testing for local storage a little bit easier. Modernizr is a JavaScript library that lets you test for many different HTML5 and CSS3 features before you try to use them, allowing you to fail gracefully. I highly recommend you check it out.

So let’s look at some code

function addLocalStorage(key, value) {
   if (Modernizr.localstorage) {
      localStorage[key] = value;
   }
   else {
      alert("No localstorage");
   }
}

This small function takes two parameters and adds the value to local storage under the given key. As you can see, there isn’t a whole lot here, and that is one reason local storage is so nice: it is simple to use. In my if-statement I am testing that local storage exists before I try to use it. In actual production code I would have a fallback to provide the same functionality in either case.

If I already know what I want my key to be, I can also set the key as a property directly on the localStorage object. In the example below I am storing an array of different states under a key named States. (Keep in mind that local storage only holds strings, so the array is coerced to the string “Active,Inactive,Pending” when it is saved.)

 localStorage.States = ['Active', 'Inactive', 'Pending'];

So where does this actually live? Here is a screenshot from Chrome’s developer tools showing what my local storage looks like:
local storage in browser
You can see I have a Key/Value pair for Hello world, and an entry for my different states.

Now that we have put things into local storage, how do we get them out?

 function readLocalStorage(key) {
    if (Modernizr.localstorage) {
       return localStorage[key];
    }
    else { alert("No LocalStorage"); }
 }

Again, not a whole lot here; just use the key to get the value back out. You can also use the property syntax (localStorage.States).
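Since local storage only holds strings, anything more structured than a string should go through JSON on the way in and out. Here is a minimal sketch that also falls back to a plain in-memory object when local storage isn’t available (the kind of fallback I mentioned above for production code); the helper names are mine.

```javascript
// Pick localStorage when the browser supports it, otherwise fall back
// to a plain object so the rest of the code doesn't have to care.
var store = (typeof localStorage !== 'undefined') ? localStorage : {};

// Serialize on the way in...
function saveValue(key, value) {
    store[key] = JSON.stringify(value);
}

// ...and deserialize on the way out (null when the key was never set).
function readValue(key) {
    return store[key] === undefined ? null : JSON.parse(store[key]);
}
```

With this in place, saveValue('States', ['Active', 'Inactive', 'Pending']) followed by readValue('States') hands you back a real array rather than a comma-joined string.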

Summary

  • You can use local storage to quickly and easily store data in the user’s browser
  • Storage persists even after the browser is closed.
  • Don’t store any sensitive information, as everything is stored in plain text.
  • Don’t store volatile data, or you will have to keep swapping out the information stored there.

This post is also posted over on my company’s blog. Go check it out!

Full Code listing

<!DOCTYPE html>
<html lang="en">
    <head>
        <meta charset="utf-8" />
        <title></title>
        <script src="scripts/jquery-1.9.0.js"></script>
        <script src="scripts/modernizr-2.6.2.js"></script>
 
 
        <script>
 
            $(document).ready(function () {
 
                $("#btnSave").click(function () {
 
                    var key = $("#txtKey").val();
                    var value = $("#txtValue").val();
 
                    addLocalStorage(key, value);
                    alert("Saved");
                    
                    // Clear the inputs after saving
                    $("#txtKey").val("");
                    $("#txtValue").val("");
 
                });
 
                $("#btnGetValue").click(function () {
                    var key = $("#txtKeyForRetrieve").val();
 
                    var value = readLocalStorage(key);
 
                    $("#storageValue").text(value);
 
                });
 
                function addLocalStorage(key, value) {
                    if (Modernizr.localstorage) {
                        localStorage[key] = value;
                    }
                    else {
                        alert("No localstorage");
                    }
                }
 
                function readLocalStorage(key) {
                    if (Modernizr.localstorage) {
                        return localStorage[key];
                    }
                    else { alert("No LocalStorage"); }
                }
            });
 
        </script>
    </head>
    <body>
 
        <section>
            <h2>Save to LocalStorage</h2>
            <p> Key: <br/>
                <input id="txtKey" type="text"/>
            </p>
            <p>
                Value: <br/>
                <input id="txtValue" type="text"/>
            </p>
            <br/>
            <input type="submit" value="Save" id="btnSave"/>
        </section>
 
        <section>
            <h2>Retrieve Localstorage</h2>
            <p> Key: <br/>
                <input id="txtKeyForRetrieve" type="text"/>
            </p>
 
            <div id="storageValue"></div>
                
            <input type="submit" value="Retrieve" id="btnGetValue"/>
        </section>
    </body>
</html>

Using Microsoft Fakes Part 2: Shims

On May 1, 2013, in .Net, Unit Testing, by admin

This is part 2 of Using Microsoft Fakes. If you missed the first one, check it out.

In the first part of this series I talked about using stubs to unit test your code. Stubs work great, but sometimes you don’t have control over everything; you can’t exactly inject the System dll into your code. So how do you unit test logic that has a dependency on something you don’t have control over? Enter shims.

Shims allow you to intercept a call at run time. This differs slightly from stubs, where we pass in the stubbed-out code ourselves. It is a very small difference, and the technique is very similar.

Using Shims

I am going to use the same example as in my previous post, where I am testing some business logic for saving a person, but in this case I am using a Guid as an identifier instead of an integer passed back from the data access layer.

        public Guid Save(Person p) {
            p.Name = "Changed Name";
            p.UniqueId = Guid.NewGuid();
 
            _personData.SavePerson(p);
            return p.UniqueId;
        }

So I don’t have control over Guid.NewGuid(). There is no way I can inject this functionality and no way I can test that a specific Guid gets assigned. I realize this is a bit contrived, since you would never need to know that a specific Guid gets generated, but hey, this is just an example of usage; no one ever said it had to make sense!

Ok, now that I have that disclaimer out of the way, let’s move on to the unit test code. Just as before, I am going to inject the stubbed-out version of IPersonData, but now I am also going to put a shim in front of Guid.NewGuid().

using System;
using System.Fakes;
using Data;
using Data.Fakes;
using DataContracts;
using FakesTemp; //The name of my business logic class
using Microsoft.QualityTools.Testing.Fakes;
using Microsoft.VisualStudio.TestTools.UnitTesting;
 
namespace testProj {
    [TestClass]
    public class PersonTests {
        [TestMethod]
        public void PersonTests_HappyPath()
        {
            IPersonData personData = new StubIPersonData
            {
                SavePersonPerson = person => { return person.Age; }
            };
 
            PersonManager target = new PersonManager(personData);
 
            var p = new Person
                {
                    Age = 28,
                    Name = "Bruce Campbell"
                };
 
 
            Guid expected = new Guid("187695AC-1EE6-4BB4-BB71-366DBE9C8D0D");
            using (ShimsContext.Create())
            {
                ShimGuid.NewGuid = () => new Guid("187695AC-1EE6-4BB4-BB71-366DBE9C8D0D");
                Guid actual = target.Save(p);
                
                Assert.AreEqual(expected, actual);
            }
        }
    }
}

A few things to note. In order to generate the shim for Guid.NewGuid I had to create a fakes assembly for the System dll. To do this, simply right-click on the dll in the References folder in Solution Explorer and click Add Fakes Assembly. After that, just add the using statement for the new System.Fakes assembly.

To get started using shims, you must first create a ShimsContext. This defines the scope for the shim; any shim you create in this block will be used anywhere within the call. So if I had multiple calls to Guid.NewGuid() throughout the execution of my test, they would all get the same Guid. Just keep in mind that shimming built-in code can cause some weird results if you aren’t careful.

To actually create the shim, you simply prefix whatever type you are shimming with the word Shim. Take a second for that to sink in. That’s pretty much it. From there you define which specific method you are overriding, and then tell it what you want the value to be. Very, very simple.

Summary

  • Using shims allows you to overwrite inbuilt methods at run time of your unit test
  • You should limit the scope of your shim to as narrow as possible to reduce the risk of unintended shimming
  • Combining both Stubs and Shims allows you to really isolate your code for testing
  • Unit testing is AWESOME!

This is also posted on my company blog


Using Microsoft Fakes Part 1: Stubs

On April 29, 2013, in .Net, Testing, Unit Testing, by admin

Starting with the release of Visual Studio 2012 Update 2 the Fakes framework is now available to use in both the Premium and Ultimate versions! HURRAY! Why am I so excited? Because this will allow me to unit test my code.

Now I know everyone doesn’t get as excited about unit testing code as I do, but that’s ok. By the end of this 2-part series hopefully you will at least be a little excited, and realize that it isn’t as big of a chore as you may think.

The Fakes framework is based on the Moles and Pex frameworks that we have been using for a while now. So what’s new in Fakes? In terms of general usage, not a whole lot; most of the changes are under the hood.

The biggest differences are:

  1. You don’t need to use the HostType Attribute
  2. Detours are referred to as Shims instead of Moles.
  3. Moles files are now fakes files

So a lot of things seem pretty cosmetic.

In this first post I will be talking about stubs, but before we do, let’s quickly discuss the difference between stubs and shims. You would use a stub when you are using a dependency injection pattern combined with some sort of IoC container such as Unity.

You use a shim when you want to override the behavior of code that you really don’t have control over; something you can’t pass in. So if you wanted to change the behavior of a built-in .Net method, you could use a shim to do that.

Using Stubs

In my example I have the following business logic code that I want to test out.

  public class PersonManager
    {
        private IPersonData _personData;
        public PersonManager(IPersonData personData)
        {
            _personData = personData;
 
        } 
  
        public int Save(Person p)
        {
            p.Name = "Changed Name";
 
            return _personData.SavePerson(p);
        }
    }

I am injecting my data access code in the constructor above, so this is a great example of when I want to use a stub. The code that I want to test is the Save method and as you can see there isn’t a whole lot there. But that is fine for our example.

In order to test this I first need to create my unit test project, just as you always have; no changes there. The next step is to add all of the assemblies you will need. In this case I need my data contracts, data access, and business logic layers to get this example working, but the only thing I need to fake is my data access code, since I want to isolate my business logic. To do that, assuming you have Fakes installed, right-click on the reference and click Add Fakes Assembly. Easy as that. I am now ready to write my unit test.

using System;
using Data;
using Data.Fakes;
using DataContracts;
using FakesTemp; //Name of my business logic project
using Microsoft.VisualStudio.TestTools.UnitTesting;
 
namespace BusinessLogic.Tests {
    [TestClass]
    public class PersonTests {
        [TestMethod]
        public void PersonTests_HappyPath()
        {
            IPersonData personData = new StubIPersonData {
                SavePersonPerson = person => { return person.Age; }

            };
 
            PersonManager target = new PersonManager(personData);
 
            var p = new Person
                {
                    Age = 28,
                    Name = "Bruce Campbell"
                };
 
            int expected = 28;
            int actual = target.Save(p);
 
            Assert.AreEqual(expected, actual);
        }
    }
}

If you have been using Moles for a while, this shouldn’t look very different to you. The only major difference is using StubIPersonData instead of the SIPersonData you would see with Moles.

Taking a closer look at StubIPersonData, we can see that we are stubbing out the SavePerson method that my business logic above is calling. So why is it called SavePersonPerson? The second ‘Person’ is the type of the argument being passed in. For example, if I were passing a string into the method, the stub would be named SavePersonString.

The next part is where I declare the parameters being passed in, in this case person. This catches whatever is passed in, and I can then manipulate the object in my stub if I need to. In this example I am just going to return the person’s age to prove that point, but you can return any integer you wish, since that is the return type of the SavePerson method.

From there you just create the class you are testing (by convention named target) and pass in your stubbed-out IPersonData. Then create a Person object to pass into the Save method, call Save, and capture the result. I am expecting to get back the age of the person, so I assert that is actually what I get.

Taking it further

In the example above I am only stubbing out one method in one class, but you can stub any number of classes and any number of methods inside those classes. Just because a method exists in your interface doesn’t mean you have to stub it if the method you are testing doesn’t use it. There is no reason to add clutter to your code.

Summary

  • Fakes are now available to users in both the Ultimate and Premium versions of Visual Studio 2012 assuming you have update 2 installed.
  • Fakes assemblies allow you to either create stubs or shims
  • Stubs allow you to override implementations of classes you pass in using an IOC container or something similar.
  • Shims allow you to override built in classes (or classes you don’t have control over) at run time.
  • Unit testing is AWESOME!

Look for part two of this series where we are going to talk about using shims!

This is also posted on my company blog!


Getting Started with Web Sites on Windows Azure

On April 24, 2013, in Azure, by admin

Creating a new website has never been easier.  With just a few clicks in Windows Azure you can have a new website up and running, and then with just a few more you can create a brand new WordPress blog.  If you haven’t guessed by now this site is running WordPress on Windows Azure.  I am going to run through how to set up a WordPress blog and to set up the DNS in order to point your domain to it.

Setting up the Site

First, log into your Azure Management Portal and click the add button at the bottom of the screen.  From there, click Compute -> Web Site -> From Gallery, which will open a new window.  If you just want to create a simple site, you would choose Quick Create or Custom rather than From Gallery.

Create New Website

Once you click on From Gallery, a new window will come up; select the WordPress option and click next.  The next page has you create a new URL and database and select which region you want them housed in.  Accept the EULA for MySQL and click create.

Once your site is up and running you can navigate to it and go through the “Famous five minute WordPress installation process.”

newsite

Setting up a custom URL

If you want to add a custom URL to your new WordPress site, you must scale your site from the free model to the shared model.  To do that, you must first buy a domain name from your friendly neighborhood domain registrar.  I use GoDaddy, but there are many other options out there.  Once you have purchased it, log back into the Azure management portal and navigate to the dashboard for your site.  Then click on the scale tab, select shared, and click save.

Next, move over to the configure tab.  You should see a button for managing domains.  Click it, and a pop-up comes up that tells you exactly what you need to do, if you read all the fine print.  At the bottom of the window you should see the IP address you need for your A record.  Copy that address, navigate over to your domain registrar, and add your A record.

manageDomains

That record takes a few hours to update, a day at most.  Once your A record propagates, go back to your Azure management portal and add the domain to the list.  Azure will verify that the A record exists before allowing you to continue.  That takes care of the base URL, e.g. rivercityprogrammer.com.  If you want to put the www in front, you must add the awverify.www and www CNAME records to your DNS zone.  And again, wait for the zone to propagate, then go back to the management portal and add the full www domain to the domain list.  After that you are good to go.

Summary

  • Using Azure allows you to quickly create websites
  • Combine Azure websites with the WordPress installation and you can be up and running in minutes
  • Adding a custom domain to your site may take a few hours for all the records to propagate, but it is very easy to set up.

TechEd 2013 Here I come!

On April 11, 2013, in Conferences, TechEd 2013, by admin

Who’s got two thumbs and is going to TechEd 2013?

thumbs

This guy!

Look for daily (I say that now) updates from the Big Easy June 3-6.


Azure provides a whole host of options for storing your data. The most common one that everyone knows and loves is Azure SQL, but there are other options available, such as table storage and blob storage. For now we are just going to focus on table storage.

What is Table Storage?

Table storage is Azure’s NoSQL offering. For those not familiar with NoSQL, it is not the same as a traditional relational database: there are no relationships between different elements. I’ve heard it described as kind of like Excel. None of the rows are really related to each other in any hard and fast manner, and each row can look very different from every other row. I think that is an excellent explanation.

Why Table Storage?

Simplicity. Sometimes you just need an easy way to store some data, and a full-fledged RDBMS is just too much. You can put whatever you want in Azure table storage without any worry about whether it really fits in a table.

How to use it

The first step is to create your storage account in the Azure management portal by selecting New -> Data Services -> Storage -> Quick Create and then entering a name for your storage account. Below you can see I created an account named ‘tothecloud’.

Add new storage account

We will need one other thing from the Azure management portal, and that is an access key. Think of your storage account name (tothecloud) and the access key as the username and password for your storage account. Once your storage account is created, click on the account to go to its dashboard; at the bottom you should see a button to manage access keys. Clicking that button brings up a screen that looks like the following.

Manage Access Keys

Normally you would not share these with the world, but by the time you read this I will have deleted or regenerated the keys. Both keys can be used interchangeably. You get two so that if the first one is compromised, you can switch to the second and then regenerate the compromised one. Don’t regenerate both at the same time, so there is no downtime.

And that is pretty much all you have to do from the portal. From here we are going to switch to our C# code.
First let’s look at how to put a record in table storage, and then we’ll look at how to get things out.

Adding to table storage

  using Microsoft.WindowsAzure.Storage;
  using Microsoft.WindowsAzure.Storage.Auth;
  using Microsoft.WindowsAzure.Storage.Table;

  // Authenticate with the storage account name and one of its access keys
  StorageCredentials creds = new StorageCredentials("tothecloud", "<your storage account key>");
  CloudStorageAccount account = new CloudStorageAccount(creds, false); // false = http, true = https

  CloudTableClient tableClient = account.CreateCloudTableClient();

  // Get a reference to the "people" table, creating it on first use
  CloudTable table = tableClient.GetTableReference("people");
  table.CreateIfNotExists();

  Person p = new Person {
     PersonId = 1,
     Name = "tom",
     Age = 28,
     PartitionKey = "people",
     RowKey = "1"
  };

  // Wrap the entity in an insert operation and execute it against the table
  TableOperation insertOp = TableOperation.Insert(p);
  table.Execute(insertOp);

In this example we are adding a Person object to our table storage account. Our Person object isn’t anything fancy, except that it implements the ITableEntity interface, which requires, among other things, the PartitionKey and RowKey properties that Table storage needs to find the object you put in there. The pair must be unique; if you insert a duplicate, Azure Table storage will throw an error. Now let’s take a quick walk through the code.
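The Person class itself isn’t shown above. A minimal version might look like the following sketch, which assumes the Microsoft.WindowsAzure.Storage NuGet package and derives from TableEntity rather than implementing ITableEntity by hand.

   using Microsoft.WindowsAzure.Storage.Table;

   // TableEntity implements ITableEntity for us, supplying
   // PartitionKey, RowKey, Timestamp, and ETag
   public class Person : TableEntity
   {
      public int PersonId { get; set; }
      public string Name { get; set; }
      public int Age { get; set; }

      // A parameterless constructor is required so the storage
      // client can rehydrate entities when you query them back out
      public Person() { }
   }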

We start off by declaring some credentials using our storage account name and the account key we copied earlier. We then use those creds to create a new CloudStorageAccount; the second parameter denotes whether or not we want to use https. Using our account, we create a cloud table client and get a reference to our table named people; if the table doesn’t exist, we go ahead and create it. After defining our Person object, we create a table operation for the insert and execute it against the table. Very simple. I didn’t define any schema at all in my table storage container.

Now that we have something in there, how do we get it out?

Selecting From Table Storage

   using System.Collections.Generic;
   using Microsoft.WindowsAzure.Storage;
   using Microsoft.WindowsAzure.Storage.Auth;
   using Microsoft.WindowsAzure.Storage.Table;

   StorageCredentials creds = new StorageCredentials("tothecloud", "<your storage account key>");
   CloudStorageAccount account = new CloudStorageAccount(creds, false);
   CloudTableClient tableClient = account.CreateCloudTableClient();

   CloudTable table = tableClient.GetTableReference("people");
   table.CreateIfNotExists();

   // Build a query for every entity whose PartitionKey is "people"
   TableQuery<Person> query = new TableQuery<Person>().Where(
      TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, "people"));

   // Execute the query; results are returned as you enumerate them
   IEnumerable<Person> results = table.ExecuteQuery(query);

Just as in the first example, we get our credentials, account, and table. After that things get a little more interesting. We are creating a TableQuery based on Person, and you can filter on whatever property of Person you want. In this case I want everything from the people table, so I’m taking everything where the PartitionKey equals people. After I have my query, I run it against my table and get back an enumerable list of Person objects. From there you can work with the result set as you normally would.
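If you need to narrow things down further, filters can be combined. As a sketch under the same assumptions as above, this query asks for only the entity in the people partition whose RowKey is "1":

   string partitionFilter = TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, "people");
   string rowFilter = TableQuery.GenerateFilterCondition("RowKey", QueryComparisons.Equal, "1");

   // Join the two conditions with a logical AND
   TableQuery<Person> query = new TableQuery<Person>()
      .Where(TableQuery.CombineFilters(partitionFilter, TableOperators.And, rowFilter));

Because this filter pins down both the PartitionKey and the RowKey, it resolves to a single entity lookup, which is the fastest kind of query table storage can do.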

Limitations

As with any solution, Azure table storage does have some limitations. This isn’t a full-fledged relational database. You can relate items in your container, but there is no hard link, so if you need real foreign keys you may want to look at SQL Azure. Along the same line, there are no secondary indexes (only the PartitionKey/RowKey pair is indexed), so some queries can take a while to complete when you have a lot of data in a partition.

Another limitation is that the objects you upload into Azure Table storage have to be flat. Your object can have properties of simple types, but it cannot have a complex type as a property. So in my example above, if I had another class named Address, I could not use it as a property of my Person object. Everything has to be flat, which is a bit different from some other NoSQL options.
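A common workaround is to flatten the nested type’s fields into prefixed properties on the entity itself. This is only a sketch, assuming a hypothetical Address class with a street and a city:

   // Instead of giving Person an Address property...
   public class Address
   {
      public string Street { get; set; }
      public string City { get; set; }
   }

   // ...flatten the address fields onto the entity itself
   public class Person : TableEntity
   {
      public string Name { get; set; }
      public string AddressStreet { get; set; }
      public string AddressCity { get; set; }
   }

You lose the nested shape in storage, so any mapping back to a rich object model is up to your own code.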

So just like any technology make sure you weigh your pros and cons.

Summary

  • Azure Table Storage is the Azure NoSQL option for cloud storage
  • Azure Table storage is a fast and easy way to get storage for your application
  • There are no foreign keys or indexes in table storage
  • All objects must be flat
  • You can get all the Azure DLLs from NuGet
  • You can sign up for a Windows Azure account by clicking the link on the right!

There are APIs available for many languages other than .NET that allow you to take advantage of Azure table storage and all it has to offer. Many of these APIs simply wrap the Table storage REST interface.

This post is also posted at TCSC.com
