Getting Started with Azure Functions and F#

While it's been possible to use F# in Azure Functions for some time now, it wasn't until this week that it really became a first-class citizen. Previously the runtime would execute your F# scripts by calling out to fsi, but now F# is supported directly by the runtime, including input and output bindings, making it a far more compelling option.

I recently built a somewhat complex "serverless" application using AWS Lambda and JavaScript, wishing the entire time that I could have been writing it in F#. In this world of event-driven functions a language like F# really shines, so I'm excited to see Microsoft embrace it in Azure Functions. In this post I'll walk through creating a simple Azure Functions application in F# that takes in a URL for an image, runs it through Microsoft's Cognitive Services Emotion API, and overlays each face with an emoji that matches the detected emotion. This started out as an attempt to replicate Scott Hanselman's demo in F#, but then I figured I may as well take it a step further while I was in there.

Initial Setup

While you can do a lot through the editor inside the Azure portal, for this demo I'm going to walk through creating an application that uses source control to handle deployments, since this is closer to what you'd be doing for any real application.

If you haven't installed it already, you'll want to install the azure-functions-cli npm package:

npm i -g azure-functions-cli

This is a nice CLI tool the Azure Functions team maintains to help build and manage functions. I'll also note that all of this is still in a preview state and a bit of a moving target, so the experience isn't without a few rough edges. I have no doubt these will be smoothed out over time.

With that installed, run func init to create a new Git repository with some initial files:

C:\code\github\gshackles\facemoji> func init
Writing .gitignore
Writing host.json
Writing .secrets
Initialized empty Git repository in C:/code/github/gshackles/facemoji/.git/


Tip: run func new to create your first function.

Next, commit those files and push the repository out somewhere. In my case, I'm using GitHub.

In the Azure portal, go ahead and create a new Function App, and then under its settings choose to configure continuous integration. Connect the app to the Git repository you just created, which will allow Azure to automatically deploy your functions anytime you push.

Create The Function

Now we can actually start creating our function! From the command line, run func new:

C:\code\github\gshackles\facemoji [master +3 ~0 -0 !]> func new

     _-----_
    |       |    ╭──────────────────────────╮
    |--(o)--|    │   Welcome to the Azure   │
   `---------´   │   Functions generator!   │
    ( _´U`_ )    ╰──────────────────────────╯
    /___A___\   /
     |  ~  |
   __'.___.'__
 ´   `  |° ´ Y `

? Select an option... List all templates
There are 50 templates available
? Select from one of the available templates... QueueTrigger-FSharp
? Enter a name for your function... facemoji
Creating your function facemoji...
Location for your function...
C:\code\github\gshackles\facemoji\facemoji


Tip: run `func run <functionName>` to run the function.

This is one of those rough edges I mentioned: as of right now the only F# template in this tool is QueueTrigger-FSharp, so we'll choose that even though it doesn't match what we're actually going to do. I'm sure the tool will be updated with more options very soon.

In our case we're going to use HTTP input and output instead of being driven by a queue, so update the contents of function.json to:

{
  "bindings": [
    {
      "type": "httpTrigger",
      "name": "req",
      "authLevel": "anonymous",
      "direction": "in"
    },
    {
      "type": "http",
      "name": "res",
      "direction": "out"
    }
  ],
  "disabled": false
}

We can also go ahead and add a project.json file to declare some NuGet dependencies:

{
    "frameworks": {
        "net46": {
            "dependencies": {
                "FSharp.Data": "2.3.2",
                "Newtonsoft.Json": "9.0.1"
            }
        }
    }
}

You'll also want to copy in the PNG files found in my GitHub repository. Finally, go into your app settings and add a setting named EmotionApiKey whose value is the key you get from Cognitive Services.

Implement the Function

Okay, with all that out of the way, let's actually implement this thing! The implementation of the function will go in run.fsx. Since this is F#, we'll build things out from top to bottom as small functions we can compose together. First, let's pull in some references we'll need:


#r "System.Drawing"

open System
open System.IO
open System.Net
open System.Net.Http
open System.Net.Http.Headers
open System.Drawing
open System.Drawing.Imaging
open FSharp.Data
open Newtonsoft.Json

Next, create a few types to match the Cognitive Services API models and pull in some environment variables:

type FaceRectangle = { Height: int; Width: int; Top: int; Left: int; }
type Scores = { Anger: float; Contempt: float; Disgust: float; Fear: float;
                Happiness: float; Neutral: float; Sadness: float; Surprise: float; }
type Face = { FaceRectangle: FaceRectangle; Scores: Scores }

let apiKey = Environment.GetEnvironmentVariable("EmotionApiKey")
let appPath = Path.Combine(Environment.GetEnvironmentVariable("HOME"), "site", "wwwroot", "facemoji")

Originally I wanted to use the JSON type provider to avoid needing Json.NET and these models, but I ran into some issues there, another rough edge I suspect will be ironed out.
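
For reference, here's roughly what that approach might have looked like, purely as a sketch: the inline sample string and the parseFaces helper below are hypothetical and aren't part of the actual function, and FSharp.Data is already opened above.

// Minimal sketch of the JSON type provider alternative (hypothetical sample string)
type EmotionResponse = JsonProvider<"""[{"faceRectangle":{"height":1,"width":1,"top":1,"left":1},"scores":{"anger":0.1,"contempt":0.1,"disgust":0.1,"fear":0.1,"happiness":0.1,"neutral":0.1,"sadness":0.1,"surprise":0.1}}]""">

// Parse the API response and read the provided properties directly
let parseFaces (json: string) =
    EmotionResponse.Parse(json)
    |> Seq.map (fun face -> face.FaceRectangle, face.Scores.Happiness)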

Next, we'll need to parse the query string of the request sent to us, grab the image URL from it, and download the image into a byte array:

let getImageUrl (req: HttpRequestMessage) =
    req.GetQueryNameValuePairs()
    |> Seq.find(fun pair -> pair.Key.ToLowerInvariant() = "url")
    |> fun pair -> pair.Value

let getImage url = 
    Http.Request(url, httpMethod = "GET")
    |> fun (imageResponse) -> 
        match imageResponse.Body with
        | Binary bytes -> bytes
        | _ -> failwith "expected binary response but received text"

With the image downloaded, we can send it to Cognitive Services to have it analyzed:

let getFaces bytes =
    Http.RequestString("https://api.projectoxford.ai/emotion/v1.0/recognize",
        httpMethod = "POST",
        headers = [ "Ocp-Apim-Subscription-Key", apiKey ],
        body = BinaryUpload bytes)
    |> fun (json) -> JsonConvert.DeserializeObject<Face[]>(json)

Now that we have a list of faces in the image, we need to determine which emoji to show for each one:

let getEmoji face =
    match face.Scores with
        | scores when scores.Anger > 0.1 -> "angry.png"
        | scores when scores.Fear > 0.1 -> "afraid.png"
        | scores when scores.Sadness > 0.1 -> "sad.png"
        | scores when scores.Happiness > 0.5 -> "happy.png"
        | _ -> "neutral.png"
    |> fun filename -> Path.Combine(appPath, filename)
    |> Image.FromFile

So now we have an image, a list of faces, and a matching emoji for each one. Let's tie those together and draw the emoji onto the image, returning a new image as a byte array:

let drawImage (bytes: byte[]) faces =
    use inputStream = new MemoryStream(bytes)
    use image = Image.FromStream(inputStream)
    use graphics = Graphics.FromImage(image)
    
    faces |> Array.iter(fun face ->
        let rect = face.FaceRectangle
        let emoji = getEmoji face
        graphics.DrawImage(emoji, rect.Left, rect.Top, rect.Width, rect.Height)
    )

    use outputStream = new MemoryStream()
    image.Save(outputStream, ImageFormat.Jpeg)
    outputStream.ToArray()

Now we just need to return that image as an HTTP response:

let createResponse bytes =
    let response = new HttpResponseMessage()
    response.Content <- new ByteArrayContent(bytes)
    response.StatusCode <- HttpStatusCode.OK
    response.Content.Headers.ContentType <- MediaTypeHeaderValue("image/jpeg")
    
    response

That's all the plumbing we need here for our function, so all that's left is to define the Run method that Azure Functions will actually invoke:

let Run (req: HttpRequestMessage) =  
    let bytes = getImage <| getImageUrl req
    
    getFaces bytes
    |> drawImage bytes
    |> createResponse

In less than 80 lines of code we're taking a URL input, downloading an image, detecting faces and emotions, drawing emoji over each face, and returning the new image as an HTTP response. Let's try it out!
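
Once the app is deployed, you can hit the function over HTTP and pass the image URL in the query string. As a quick sketch, assuming a hypothetical app name of facemoji-demo and a placeholder test image, calling it from an F# script with FSharp.Data looks something like this:

open System.IO
open FSharp.Data

// Hypothetical function app name and test image; substitute your own
let functionUrl = "https://facemoji-demo.azurewebsites.net/api/facemoji"
let testImageUrl = "https://example.com/photo.jpg"

// Request the processed image and save it locally
let response = Http.Request(functionUrl, query = [ "url", testImageUrl ])
match response.Body with
| Binary bytes -> File.WriteAllBytes("result.jpg", bytes)
| _ -> failwith "expected binary response but received text"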

Results

Let's start out with an image that's clearly full of anger:

Anger

Okay, let's counter that with a nice happy train:

Happy

Nobody has ever known sadness quite like Jon Snow:

Sadness

And finally, Kevin McCallister to test out fear:

Fear

Not bad!

Not bad

All of the code for this app is available on GitHub.
