myPrayerJournal: Category Archive (Page 2)

Posts about myPrayerJournal | https://prayerjournal.me

Saturday, September 1, 2018
  A Tour of myPrayerJournal: Documentation

NOTES:

  • This is post 7 in a series; see the introduction for all of them, and the requirements for which this software was built.
  • Links that start with the text “mpj:” are links to the 1.0.0 tag (1.0 release) of myPrayerJournal, unless otherwise noted.

We have spent a lot of time looking at various aspects of the application, written from the perspective of a software developer. However, there is another perspective to consider - that of a user. With a “minimalist” app, how do we let them know what the buttons do?

Setting Up GitHub Pages

In other projects, we had used GitHub Pages by creating a gh-pages branch of our repository. This has some advantages, such as a page structure that is completely separate from the source files. At the same time, though, you either have to have 2 sandboxes on different branches, or switch branches when switching from coding to documentation. Fortunately, this is not the only option; GitHub Pages can publish from a /docs folder on the master branch as well. They have a nice guide on how to set it up.

The _config.yml file that's also in the /docs folder (mpj:docs folder) was put there by GitHub when we selected the theme from the Settings page of the repo. The only file we ever put there was index.md (mpj:docs/index.md).

Writing the Documentation

As GitHub Pages uses Jekyll behind the scenes, we have Markdown and the default Liquid tags available to us. We didn't need any Liquid tags, though, as the documentation is pretty straightforward. You may notice that there isn't any front matter at the top of index.md; absent that, GitHub Pages selects the title of the page from the first h1 tag in the document, defined with a leading # sequence.

If you browse the commit history for index.md, you'll see that many of the commits reference either a version or issue number. This makes it rather simple to go back in time through the source code, and not only review the code, but see how its functionality was explained in the documentation. You can also review typos that got committed, which helps keep you humble. :)

Making It Better

There is more that could be done to improve this aspect of the project. The first would be to assign it a subdomain of prayerjournal.me instead of serving the pages from bit-badger.github.io. GitHub Pages does support custom subdomains, and even supports HTTPS for these through Let's Encrypt. The second would be to work up a lightweight theme for the page that matched the look-and-feel of the main site; this would make for a more unified experience. Finally, while “minimalist” was the goal, there ended up being a few features that took lots of words to explain; a table of contents on the page would help people navigate directly to the help they need.

 

Our tour is drawing to a close; next time, we'll wrap it up with observations and opinions over the development process.


Friday, August 31, 2018
  A Tour of myPrayerJournal: The Data Store

NOTES:

  • This is post 6 in a series; see the introduction for all of them, and the requirements for which this software was built.
  • Links that start with the text “mpj:” are links to the 1.0.0 tag (1.0 release) of myPrayerJournal, unless otherwise noted.

Up to this point in our tour, we've talked about data a good bit, but it has all been in the context of whatever else we were discussing. Let's dig into the data structure a bit, to see how our information is persisted and retrieved.

Conceptual Design

The initial thought was to create a document store with one document type, the request. The request would have an ID, the ID of the user who created it, and an array of updates/records. Through the initial phases of development, our preferred document database (RethinkDB) was going through a tough period, with their company shutting down; thankfully, they're now part of the Linux Foundation, so they're still around. RethinkDB supports calculated fields in documents, so the plan was to have a few of those to keep us from having to retrieve or search through the array of updates.

We also considered a similar design using PostgreSQL's native JSON support. While it does not natively support calculated fields, a creative set of indexes could also suffice. As we thought it through a little more, though, this seemed to be over-engineering; this isn't unstructured data, and PostgreSQL handles max-length character fields very well. (This is supposed to be a “minimalist” application, right?) A relational structure would fit our needs quite nicely.

The starting design, then, used 2 tables. request had an ID and a user ID; history had the request ID, an “as of” date, a status (created, updated, etc.), and the optional text associated with that update. Early in development, the journal view brought together the request/user IDs along with the latest history entry that affected the text of the request, as well as the last date/time an action had occurred on the request. When the notes capability was added, it got its own note table; its structure was similar to the history table, but with non-optional text and without a status. As snoozing and recurrence capabilities were added, those fields were added to the request table (and the journal view).

The final design uses 3 tables, 2 of which have a one-to-many relationship with the third; and 1 view, which provides the calculated fields we had originally planned for RethinkDB to calculate.

Database Changes (Migrations)

Because we used 3 different server environments over the course of this project, we ended up writing our DbContext class against a database structure that already existed. For the Node.js backend, we created a DDL file (mpj:ddl.js, v0.8.4+) that checked for the existence of each table and view, and also had the SQL to execute if the check failed. For the Go version (mpj:data.go, v0.9.6+), the EnsureDB function does a similar thing; looking at line 347, it is checking for a specific column in the request table, and running the ALTER TABLE statement to add it if it isn't there.

The only change that was required since the F#/Giraffe backend has been in place was the one to support request recurrence. Since we did not end up with a scaffolded EF Core initial migration/model, we simply wrote a SQL script to accomplish these changes (mpj:sql directory).1

The EF Core Model

EF Core uses the familiar DbContext class from prior versions of Entity Framework. myPrayerJournal does take advantage of a feature that just arrived in EF Core 2.1, though - the DbQuery type. DbSets are collections of entities that generally map to an underlying database table. They can be mapped to views, but unless it's an updateable view, updating those entities results in a runtime error; plus, since they can't be updated, there's no need for the change tracking mechanism to care about the entities returned. DbQuery addresses both these concerns, providing lightweight read-only access to data from views.

The DbContext class is defined in Data.fs (mpj:Data.fs), starting in line 189. It's relatively straightforward, though if you have only ever seen a C# model, it's a bit different. The combination of val mutable x : [type] and the [<DefaultValue>] attribute is the F# equivalent of C#'s [type] x; declaration, which creates a variable and initializes reference types to null. The EF Core runtime provides these instances to their setters (lines 203, 206, 209, and 212), and the application code uses them via the getters (a line earlier, each).
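
To make that pattern concrete, here's a minimal sketch of the shape involved; the entity, property, and context names below are simplified stand-ins, not the exact definitions from Data.fs.

  open Microsoft.EntityFrameworkCore

  [<CLIMutable; NoComparison; NoEquality>]
  type Request =
    { requestId : string
      userId    : string
      showAfter : int64 }

  type AppDbContext (options : DbContextOptions<AppDbContext>) =
    inherit DbContext (options)

    [<DefaultValue>]
    val mutable private requests : DbSet<Request>

    /// The EF Core runtime assigns this via the setter; application code reads the getter
    member this.Requests
      with get () = this.requests
      and  set v  = this.requests <- v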

The OnModelCreating overridden method (line 214) is called when the runtime first creates its instance of the data model. Within this method, we call the .configureEF function of each of our database types. The name of this function isn't prescribed, and we could define the entire model without even referencing the data types of our entities; however, this technique gives us a “configure where it's defined” paradigm with each entity type. While the EF “Code First” model creates tables that don't need a lot of configuring, we must provide more information about the layout of the database tables since we're writing a DbContext to target an existing database.

Let's start out by taking a look at History.configureEF (line 50). Line 53 says that we're mapping to the table history. This seems to be a no-brainer, but EF Core would (by convention) be expecting a History table; since PostgreSQL uses a different syntax for case-sensitive names, these queries would look like SELECT ... FROM "History" ..., resulting in a nice “relation does not exist” error. Line 54 defines our compound key (requestId and asOf). Lines 55-57 define certain properties of the entity as required; if we try to store an entity where these fields are not set, the runtime will raise an exception before even trying to take it to the database. (F#'s non-nullability makes this a non-issue, but it still needs to be defined to match the database.) Line 58 may seem to do nothing, but what it does is make the text property immediately visible to the model builder; then, we can define an OptionConverter<string>2 for it, which will translate between null and string option (None = null, Some [x] = [x]). (Lines 60-61 are left over from when I was trying to figure out why line 62 was raising an exception, leading to the addition of line 58; they could safely be removed, and will be for a post-1.0 release.)
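
Here's a rough sketch of this “configure where it's defined” idea, boiled down to one hypothetical entity. It condenses the converter wiring into a single HasConversion call (the actual Data.fs assigns the converter in a separate step, as described above), and OptionConverter<'T> comes from the NuGet package mentioned in footnote 2.

  open Microsoft.EntityFrameworkCore

  [<CLIMutable; NoComparison; NoEquality>]
  type History =
    { requestId : string
      asOf      : int64
      status    : string
      text      : string option }
    static member configureEF (mb : ModelBuilder) =
      mb.Entity<History> (fun m ->
        m.ToTable "history"                   |> ignore
        m.HasKey ("requestId", "asOf")        |> ignore
        m.Property("requestId").IsRequired () |> ignore
        m.Property("status").IsRequired ()    |> ignore
        // translate between NULL and string option for the optional text column
        // (OptionConverter<'T> comes from the package in footnote 2; open its namespace as needed)
        m.Property("text").HasConversion (OptionConverter<string> ()) |> ignore)
      |> ignore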

History is the most complex configuration, but let's take a peek at Request.configureEF (line 126) to see one more interesting technique. Lines 107-110 define the history and notes collections on the Request type; lines 138-145 define the one-to-many relationship (without a foreign key entity in the child types). Note the casts to IEnumerable<x> (lines 138 and 142) and obj (lines 140 and 144); while F# is good about inferring types in a lot of cases, these functions are two places it is not. We can use the :> operator for the cast, because these types are part of the inheritance chain. (The :?> operator is used for potentially unsafe casts.)

Finally, the attributes above each record type need a bit of explanation; each one has [<CLIMutable; NoComparison; NoEquality>]. The CLIMutable attribute creates a no-argument constructor for the record type, which the runtime can use to create instances of the type. (The side effect is that we may get null instances of what is expected to be a non-null type, but we'll look at dealing with that a bit later.) The NoComparison and NoEquality attributes keep F# from creating field-level equality and comparison methods on the types. While these are normally helpful, there is an edge case where they can raise NullReferenceExceptions, especially when used on null instances. As these record types are simply our data transfer objects (both from SQL and to JSON), we don't need the functionality anyway.

Reading and Writing Data

EF Core uses the “unit of work” pattern with its DbContext class. Each instance maintains knowledge of the entities it's loaded, and does change tracking against those entities, so it knows what commands to issue when .SaveChanges() (or .SaveChangesAsync()) is called. It doesn't do this for free, though, and while EF Core does this much more efficiently than Entity Framework proper, F# record types do not support mutation; if req is a Request instance, for example, { req with showAfter = 123456789L } returns a new Request instance.

This is the problem whose solution is enabled by lines 227-233 in Data.fs. We can manually register an instance of an entity as either added or modified, and when we call .SaveChanges(), the runtime will generate the SQL to update the data store accordingly. This also allows us to use .AsNoTracking() in our queries (lines 250, 258, 265, and 275), which means that the resultant entities will not be registered with the change tracker, saving that overhead. Notice that we don't specify that on line 243; since Journal is defined as a DbQuery instead of a DbSet, we get change-tracking-avoidance for free.
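 

As a sketch of what those pieces look like together (using the simplified Request and AppDbContext from the earlier sketch; the real context wraps these calls in its own members):

  open System.Linq
  open Microsoft.EntityFrameworkCore

  /// Register a replaced Request instance as an update, then persist it
  let updateShowAfter (db : AppDbContext) (req : Request) (showAfter : int64) =
    let updated = { req with showAfter = showAfter }
    db.Entry(updated).State <- EntityState.Modified
    db.SaveChanges () |> ignore

  /// Read without change tracking; the result is never registered with the change tracker
  /// (the repo runs results like this through toOption - see below)
  let tryRequestById (db : AppDbContext) reqId usrId =
    db.Requests.AsNoTracking().FirstOrDefault (fun r -> r.requestId = reqId && r.userId = usrId)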

Generally speaking, the preferred method of writing queries against a DbContext instance is to define extension methods against it. These are static by default, and they enable the context to be as lightweight as possible, while extending it when necessary. However, since this context is so small, we've created 6 methods on the context that we use to obtain data.

If you've been reading along with the tour, we have already seen a few API handler functions (mpj:Handlers.fs) that use the data context. Line 137 has the handler for /api/journal, the endpoint to retrieve a user's active requests. It uses .JournalByUserId(), defined in Data.fs line 242, whose signature is string -> JournalRequest seq. (The latter is an F# alias for IEnumerable<JournalRequest>.) Back in the handler, we use db ctx to get the context (more on that below), then call the method; we're piping the output of userId ctx into it, so it gets its lone parameter from the pipe, then its output is piped to the asJson function we discussed as part of the API.

Line 192, the handler for /api/request/[id]/history, demonstrates both inserting and updating data. We attempt to retrieve the request by its ID and the user ID; if that fails, we return a 404. If it succeeds, though, we add a history entry (lines 201-207), and optionally update the showAfter field of the request based on its recurrence. Finally, the call on line 212 commits the changes for this particular instance. Since the .SaveChanges[Async]() methods return the number of records affected, we cannot use the do! operator for this; F# makes you explicitly ignore values you aren't either returning or assigning to a name. However, defining _ as a parameter or name demonstrates that we realize there is a value to be had; we just are not going to do anything with it.

We mentioned that CLIMutable record types could be null. Since record types cannot normally be null, we cannot code something like match [var] with null -> ...; it's a compiler syntax error. What we can do, though, is use the box operator. box “boxes” whatever value we have into an object container, where we can then check it against null. The function toOption in Data.fs on line 11 does this work for us; throughout the retrieval methods, we use it to return options for items that are either present or absent. This is why we could do the match statement in the /api/request/[id]/history handler against Some and None values.
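
A minimal version of that helper looks something like this; the repo's version is equivalent in spirit, if not letter-for-letter.

  /// Convert a possibly-null CLIMutable record instance into an option
  let toOption<'T> (item : 'T) =
    match box item with
    | null -> None
    | _    -> Some item

  // usage: db.Requests.AsNoTracking().FirstOrDefault (...) |> toOption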

Getting a DbContext

Since Giraffe sits atop ASP.NET Core, we use the same technique; we use the .AddDbContext() extension method on the IServiceCollection interface, and assign it when we set up the dependency injection container. In our case, it's in Program.fs (mpj:Program.fs) line 50, where we also direct it to use a PostgreSQL connection defined by the connection string “mpj”. (This comes from the unified configuration built from appsettings.json and appsettings.[Environment].json.) If we look back at Handlers.fs, lines 45-47, we see the definition of the db ctx call we used earlier. We're using the Giraffe-provided GetService<'T>() extension method to return this instance.
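
Pieced together, the registration and retrieval look roughly like this; AppDbContext is the simplified context sketched earlier, and UseNpgsql comes from the Npgsql EF Core provider.

  open Giraffe
  open Microsoft.AspNetCore.Hosting
  open Microsoft.AspNetCore.Http
  open Microsoft.EntityFrameworkCore
  open Microsoft.Extensions.Configuration
  open Microsoft.Extensions.DependencyInjection

  /// Program.fs: register the context with the DI container
  let configureServices (ctx : WebHostBuilderContext) (svc : IServiceCollection) =
    svc.AddDbContext<AppDbContext> (fun opts ->
          opts.UseNpgsql (ctx.Configuration.GetConnectionString "mpj") |> ignore)
    |> ignore

  /// Handlers.fs: pull the registered context back out of the request's services
  let db (ctx : HttpContext) = ctx.GetService<AppDbContext> ()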

 

Our tour is nearing its end, but we still have a few stops to go. Next time, we'll look at how we generated documentation to tell people how to use this app.


1 Writing this post has shown me that I need to either create a SQL creation script for the repo, or create an EF Core initial migration/model, in case the database ever has to be recreated from scratch. It's good to write about things after you do them!

2 This is also a package I wrote; it's available on NuGet, and I also wrote a post about what it does.


Thursday, August 30, 2018
  A Tour of myPrayerJournal: Authentication

NOTES:

  • This is post 5 in a series; see the introduction for all of them, and the requirements for which this software was built.
  • Links that start with the text “mpj:” are links to the 1.0.0 tag (1.0 release) of myPrayerJournal, unless otherwise noted.

At this point in our tour, we're going to shift to a cross-cutting concern for both app and API - authentication. While authentication and authorization are distinct concerns, the authorization check in myPrayerJournal is simply “Are you authenticated?” So, while we'll touch on authorization, and it will seem like a synonym for authentication, remember that they would not be so in a more complex application.

Deciding on Auth0

Auth0 provides authentication services; they focus on one thing, and getting that one thing right. They support simple username/password authentication, as well as integrations with many other providers. As “minimalist” was one of our goals, not having to build yet another user system was appealing. Since myPrayerJournal is an open source project, Auth0 provides these services at no cost. They're also the organization behind jwt.io, which popularized the JSON Web Token (JWT) standard - base64-encoded, digitally signed JSON that can be passed around as proof of identity.

This decision has proved to be a good one. In the introduction, we mentioned all of the different frameworks and server technologies we had used before settling on the one we did. In all but one of these “roads not further traveled”1, authentication worked. They have several options for how to use their service; you can bring in their library and host it yourself, you can write your own and make your own calls to their endpoints, or you can use their hosted version. We opted for the hosted version.

Integrating Auth0 in the App

JavaScript seems to be Auth0's primary language. They provide an npm package to support using the responses that will be returned from their hosted login page. The basic flow is:

  • The user clicks a link that executes Auth0's authorize() function
  • The user completes authorization through Auth0
  • Auth0 returns the result and JWT to a predefined endpoint in the app
  • The app uses Auth0's parseHash() function to extract the JWT from the URL (a GET request)
  • If everything is good, establish the user's session and proceed

myPrayerJournal's implementation is contained in AuthService.js (mpj:AuthService.js). There is a file that is not part of the source code repository; this is the file that contains the configuration variables for the Auth0 instance. Using these variables, we configure the WebAuth instance from the Auth0 package; this instance becomes the execution point for our other authentication calls.

Using JWTs in the App

We'll start easy. The login() function simply exposes Auth0's authorize() function, which directs the user to the hosted log on page.

The next in logical sequence, handleAuthentication(), is called by LogOn.vue (mpj:LogOn.vue) on line 16, passing in our store and the router. (In our last post, we discussed how server requests to a URL handled by the app should simply return the app, so that it can process the request; this is one of those cases.) handleAuthentication() does several things:

  • It calls parseHash() to extract the JWT from the request's query string.
  • If we got both an access token and an ID token:
    • It calls setSession(), which saves these to local storage, and schedules renewal (which we'll discuss more in a bit).
    • It then calls Auth0's userInfo() function to retrieve the user profile for the token we just received.
    • When that comes back, it calls the store's (mpj:store/index.js) USER_LOGGED_ON mutation, passing the user profile; the mutation saves the profile to the store and local storage, and sets the Bearer token on the API service (more on that below as well).
    • Finally, it replaces the current location (/user/log-on?[lots-of-base64-stuff]) with the URL /journal; this navigates the user to their journal.
  • If something didn't go right, we log to the console and pop up an alert. There may be a more elegant way to handle this, but in testing, the only way to reliably make this pop up was to mess with things behind the scenes. (And, if people do that, they're not entitled to nice error messages.)

Let's dive into the store's USER_LOGGED_ON mutation a bit more; it starts on line 68. The local storage item and the state mutations are pretty straightforward, but what about that api.setBearer() call? The API service (mpj:api/index.js) handles all the API calls through the Axios library. Axios supports defining default headers that should be sent with every request, and we'll use the HTTP Authorization: Bearer [base64-jwt] header to tell the API what user is logged in. Line 18 sets the default authorization header to use for all future requests. (Back in the store, note that the USER_LOGGED_OFF mutation (just above this) does the opposite; it clears the authorization header. The logout() function in AuthService.js clears the local storage.)

At this point, once the user is logged in, the Bearer token is sent with every API call. None of the components, nor the store or its actions, need to do anything differently; it just works.

Maintaining Authentication

JWTs have short expirations, usually expressed in hours. Having a user's authentication go stale is not good! The scheduleRenewal() function in AuthService.js schedules a behind-the-scenes renewal of the JWT. When the time for renewal arrives, renewToken() is called, and if the renewal is successful, it runs the result through setSession(), just as we did above, which schedules the next renewal as its last step.

For this to work, we had to add /static/silent.html as an authorized callback for Auth0. This is an HTML page that sits outside of the Vue app; however, the usePostMessage: true parameter tells the renewal call that it will receive its result from a postMessage call. silent.html uses the Auth0 library to parse the hash and post the result to the parent window.2

Using JWTs in the API

Now that we're sending a Bearer token to the API, the API can tell if a user is logged in. We looked at some of the handlers that help us do that when we looked at the API in depth. Let's return to those and see how that works.

Before we look at the handlers, though, we need to look at the configuration, contained in Program.fs (mpj:Program.fs). You may remember that Giraffe sits atop ASP.NET Core; we can utilize its JwtBearer methods to set everything up. Lines 38-48 are the interesting ones for us; we use the UseAuthentication extension method to set up JWT handling, then use the AddJwtBearer extension method to configure our specific JWT values. (As with the app, these are part of a file that is not in the repository.) The end result of this configuration is that, if there is a Bearer token that is a valid JWT, the User property of the HttpContext has an instance of the ClaimsPrincipal type, and the various properties from the JWT's payload are registered as Claims on that user.
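
In sketch form, that configuration looks something like the following; the domain and audience values are placeholders (the real ones live in that non-committed file), and the repo's exact calls may differ in detail.

  open Microsoft.AspNetCore.Authentication.JwtBearer
  open Microsoft.AspNetCore.Builder
  open Microsoft.Extensions.DependencyInjection

  let configureServices (svc : IServiceCollection) =
    svc.AddAuthentication(fun opts ->
          opts.DefaultAuthenticateScheme <- JwtBearerDefaults.AuthenticationScheme
          opts.DefaultChallengeScheme    <- JwtBearerDefaults.AuthenticationScheme)
       .AddJwtBearer(fun opts ->
          opts.Authority <- "https://[your-tenant].auth0.com/"
          opts.Audience  <- "[your-api-identifier]")
    |> ignore

  let configureApp (app : IApplicationBuilder) =
    // must run before the Giraffe pipeline so ctx.User is populated from the Bearer token
    app.UseAuthentication () |> ignore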

Now we can turn our attention to the handlers (mpj:Handlers.fs). authorize, on line 72, calls user ctx, which is defined on lines 50-51. All this does is look for a claim of the type ClaimTypes.NameIdentifier. This can be non-intuitive, as the source for this is the sub property from the JWT3. A valid JWT with a sub claim is how we tell we have a logged on user; an authenticated user is considered authorized.
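
Those two helpers boil down to something like this sketch (not the repo's exact code):

  open System.Security.Claims
  open Microsoft.AspNetCore.Http

  /// The JWT's "sub" claim surfaces as ClaimTypes.NameIdentifier once the middleware maps it
  let user (ctx : HttpContext) =
    ctx.User.Claims |> Seq.tryFind (fun c -> c.Type = ClaimTypes.NameIdentifier)

  /// The current user's ID; only called once authorize has ensured the claim exists
  let userId (ctx : HttpContext) =
    (user ctx |> Option.get).Value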

You may have noticed that, when we were describing the entities for the API, we did not mention a User type. The reason for that is simple; the only user information it stores is the sub. Requests are tied to a user ID, and the user ID is included with every attempt to make any change to a request. This keeps URL hacking or rogue API posts from getting anything meaningful out of the API.

The userId function, just below the user function, extracts this claim and returns its value, and it's used throughout the remainder of Handlers.fs. add (line 160) uses it to set the user ID for a new request. addHistory (line 192) and addNote (line 218) both use the user ID, as well as the passed request ID, to try to retrieve the request before adding history or notes to it. journal (line 137) uses it to retrieve the journal by user ID.

 

We now have a complete application, with the same user session providing access to the Vue app and tying all API calls to that user. We also use it to maintain data security among users, while truly outsourcing all user data to Microsoft or Google (the two external providers currently registered). We do still have a few more stops on our tour, though; the next is the back end data store.


1 Sorry, Elm; it's not you, it's me…

2 This does work, but not indefinitely; if I leave the same browser window open from the previous day, I still have to sign in again. I very well could be “doing it wrong;” this is an area where I probably experienced the most learning through creating this project.

3 I won't share how long it took me to figure out that sub mapped to that; let's just categorize it as “too long.” In my testing, it's the only claim that doesn't come across by its JWT name.


Wednesday, August 29, 2018
  A Tour of myPrayerJournal: The API

NOTES:

  • This is post 4 in a series; see the introduction for all of them, and the requirements for which this software was built.
  • Links that start with the text “mpj:” are links to the 1.0.0 tag (1.0 release) of myPrayerJournal, unless otherwise noted.

Now that we have a wonderful, shiny, reactive front end, we need to be able to get some data into it. We'll be communicating via JSON between the app and the server. In this post, we'll also attempt to explain some about the F# language features used as part of the API.

The Data

The entities are defined in Data.fs (mpj:Data.fs). We'll dig into them more fully in the “data store” post, but for now, we'll just focus on the types and fields. We have four types: History (lines 33-39), Note (lines 67-71), Request (lines 94-110), and JournalRequest (lines 153-173). A Request can have multiple Notes and History entries, and JournalRequest is based on a view that pulls these together and computes things like the current text of the request and when it should be displayed again.

We apply no special JSON transformations, so the fields in these record types are the properties in the exported JSON.
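
As a quick illustration (the record below is a hypothetical stand-in, not the repo's exact definition), the field names become the JSON property names verbatim:

  [<CLIMutable; NoComparison; NoEquality>]
  type Note =
    { requestId : string
      asOf      : int64
      notes     : string }

  // serializes as: { "requestId": "abc123", "asOf": 1535544000000, "notes": "..." }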

The URLs

To set the API apart from the rest of the URLs, they all start with /api/. Request URLs generally follow the form request/[id]/[action], and there is a separate URL for the journal. Line 54 in Program.fs (mpj:Program.fs) has the definition of the routes. We used Giraffe's Token Router instead of the traditional one, as we didn't need to support any URL schemes the Token Router doesn't handle. The result really looks like a nice, clean “table of contents” for the routes supported by the API.1
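
In abbreviated form, that table of contents is shaped something like the sketch below; the handler names and exact paths are illustrative, based on the handlers discussed later in this post, rather than copied from Program.fs.

  open Giraffe.TokenRouter

  let webApp =
    router Handlers.notFound [
      GET [
        route  "/api/journal"            Handlers.journal
        routef "/api/request/%s"         Handlers.Request.get
      ]
      POST [
        routef "/api/request/%s/history" Handlers.Request.addHistory
        routef "/api/request/%s/note"    Handlers.Request.addNote
      ]
      PATCH [
        routef "/api/request/%s/snooze"  Handlers.Request.snooze
      ]
    ]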

We aren't done with routes just yet, though. Let's take a look at that notFound handler (mpj:Handlers.fs); it's on line 27. Since we're serving a SPA, we need to return index.html, the entry point of the SPA, for URLs that belong to it. Picture a user sitting at https://prayerjournal.me/journal and pressing “Refresh;” we don't want to return a 404! Since the app has a finite set of URL prefixes, we'll check to see if one of those is the URL. If it is, we send the Vue app; if not, we send a 404 response. This way, we can return true 404 responses for the inevitable hacking attempts we'll receive (pro tip, hackers - /wp-admin/wp-upload.php does not exist).

Defining the Handlers

Giraffe uses the term “handler” to define a function that handles a request. Handlers have the signature HttpFunc -> HttpContext -> Task<HttpContext option> (aliased as HttpHandler), and can be composed via the >=> (“fish”) operator. The option part in the signature is the key in composing handler functions. The >=> operator creates a pipeline that sends the output of one function into the input of another; however, if a function fails to return a Some option for the HttpContext parameter, it short-circuits the remaining logic.2

The biggest use of that composition in myPrayerJournal is determining if a user is logged in or not. Authorization is also getting its own post, so we'll just focus on the yes/no answer here. The authorized handler (line 71) looks for the presence of a user. If it's there, it returns next ctx, where next is the next HttpFunc and ctx is the HttpContext it received; this results in a Task<HttpContext option> which continues to process, hopefully following the happy path and eventually returning Some. If the user is not there, though, it returns the notAuthorized handler, also passing next and ctx; however, if we look up to line 67 and the definition of the notAuthorized handler, we see that it ignores both next and ctx, and returns None. However, notice that this handler has some fish composition in it; setStatusCode returns Some (it has succeeded) but we short-circuit the pipeline immediately thereafter.
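
Condensed, that pair of handlers is shaped roughly like this (a sketch, not the exact repo code; user is the claim-lookup helper from Handlers.fs):

  open System.Threading.Tasks
  open Giraffe

  let notAuthorized : HttpHandler =
    setStatusCode 403 >=> fun _ _ -> Task.FromResult<HttpContext option> None

  let authorize : HttpHandler =
    fun next ctx ->
      match user ctx with
      | Some _ -> next ctx
      | None   -> notAuthorized next ctx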

We can see this in use in the handler for the /api/journal endpoint, starting on line 137. Both authorize and the inline function below it have the HttpHandler signature, so we can compose them with the >=> operator. If a user is signed in, they get a journal; if not, they get a 403.

When a handler is expecting a parameter from a route, the handler's signature is a bit different. The handler for /api/request/[id], on line 246, has an extra parameter, reqId. If we look back in Program.fs line 64, we see where this handler is assigned its route; and, if you compare it to the route for /api/journal on line 59, you'll see that it looks the same. The difference here is the expectations of route (for the journal) vs. routef (for the request). route expects no parameter, but routef will extract the parameters, Printf style, and pass them as parameters that precede the HttpHandler signature.

Executing the Handlers

myPrayerJournal has GET, POST, and PATCH routes defined. We'll look at representative examples of each of these over the next few paragraphs.

For our GET example, let's return to the Request.get handler, starting on line 246. Once we clear the authorize hurdle, the code attempts to retrieve a JournalRequest by the request ID passed to the handler and the user ID passed as part of the request. If it exists, we return the JSON representation of it; if not, we return a 404. Note the order of parameters to the json result handler - it's json [object] next ctx (that is, json [object] yields an HttpHandler, which is then applied to next and ctx). We also defined an asJson function on line 75; this flips the parameters so that the output object is the last parameter (asJson next ctx [object]), enabling the flow seen in the journal handler on line 137.
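
The flip itself is tiny; a sketch:

  open Giraffe
  open Microsoft.AspNetCore.Http

  /// Return an object as JSON, taking the object as the final parameter
  let asJson<'T> (next : HttpFunc) (ctx : HttpContext) (payload : 'T) =
    json payload next ctx

  // which enables:  userId ctx |> (db ctx).JournalByUserId |> asJson next ctx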

For our POST example, we'll use Request.addNote, starting on line 218. It checks authorization, and makes sure the request exists for the current user. If everything is as it should be, it then uses Giraffe's BindJsonAsync<'T> extension method to create an instance of the expected input form. Since the handler doesn't have a place to specify the expected input object, this type of model binding cannot happen automatically; the upside is, you don't waste CPU cycles trying to do it if you don't need it. Once we have our model bound, we create a new Note, add it, then return a 201 response.
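
Here's a sketch of that flow; NoteEntry is an illustrative name for the bound form (not necessarily the repo's), authorize is the handler sketched above, and the persistence step is elided. The task builder open matches the Giraffe 3.x era.

  open Giraffe
  open FSharp.Control.Tasks.V2.ContextInsensitive

  /// The JSON body we expect from the app
  [<CLIMutable>]
  type NoteEntry = { notes : string }

  let addNote requestId : HttpHandler =
    authorize
    >=> fun next ctx ->
          task {
            let! form = ctx.BindJsonAsync<NoteEntry> ()
            // ... verify the request belongs to the current user, create and save the Note ...
            return! setStatusCode 201 next ctx
          }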

PATCH handlers follow the same pattern; Request.snooze is one such handler, starting on line 291. The flow is very similar to the one for Request.addNote, except that we're updating instead of adding, and we return 204 instead of 201.

Configuring the Web Server

Giraffe enables Suave-like function composition on top of Kestrel and the ASP.NET Core infrastructure. Rather than using the Startup class, myPrayerJournal uses a functional configuration strategy. The calls are in Program.fs starting on line 115; there are lots of other guides on how to configure ASP.NET Core, so I won't say too much more about it.

Notice, though, line 31. Earlier, we discussed the >=> operator and how it worked. This is an example of the >> composition operator, which is part of F#. For functions whose output can be run directly into the next function's input, the >> operator allows those functions to be composed into a single function. If we were to look at the composed function within the parentheses, its signature would be string -> unit; so, we pass the string “Kestrel” to it, and it runs through and returns unit, which is what we expect for a configuration function. Here's how that breaks down:

  • ctx.Configuration.GetSection's signature is string -> IConfigurationSection
  • opts.Configure's signature is IConfiguration -> KestrelConfigurationLoader (IConfigurationSection implements IConfiguration)
  • ignore's signature is 'a -> unit

If this still doesn't make sense, perhaps this will help. The Configure.kestrel function could also have been written using the |> piping operator instead, with the equivalent code looking like:

  let kestrel (ctx : WebHostBuilderContext) (opts : KestrelServerOptions) =
    ctx.Configuration.GetSection "Kestrel"
    |> opts.Configure
    |> ignore

 

That concludes our tour of the API for now, though we'll be looking at it again next time, when we take a deeper dive into authentication and authorization using Auth0.


1 While we tried to follow REST principles in large part, the REST purists would probably say that it's not quite RESTful enough to claim the name. But, hey, we do use PATCH, so maybe we'll get partial credit…

2 Scott Wlaschin has a great post entitled “Railway Oriented Programming” that explains this concept in general, and the fish operator specifically. Translating his definition to Giraffe's handlers, the first function is switch1, the next parameter is switch2, and the HttpContext is the x parameter; instead of Success and Failure, the return type utilizes the either/or nature of an option being Some or None. If you want to understand what makes F# such a great programming model, you'll spend more time on his site than on The Bit Badger Blog.


Sunday, August 26, 2018
  A Tour of myPrayerJournal: State in the Browser

NOTES:

  • This is post 3 in a series; see the introduction for all of them, and the requirements for which this software was built.
  • Links that start with the text “mpj:” are links to the 1.0.0 tag (1.0 release) of myPrayerJournal, unless otherwise noted.

Flux (a pattern that originated at Facebook) defines state, as well as actions that can mutate that state. Redux is the most popular implementation of that pattern, and naturally works very well with React. However, other JavaScript frameworks use this pattern, as it ensures that state is managed sanely. (Well, the state is sane, and so is the developer!)

Vuex, part of the Vue ecosystem, is a flux implementation for Vue that provides a standard way of managing state. They explain it in much more detail, so if the concept is a new one, you may want to read their “What is Vuex?” page before you continue. Once you are ready, let's take a look at how it's used in myPrayerJournal.

Defining the State

The store (mpj:store/index.js) exports a single new Vuex.Store instance, with its state property defining the items it will track, along with initial values for those items. This represents the initial state of the app, and is run whenever the browser is refreshed.

In looking at our store, there are 4 items that are tracked; two items are related to authentication, and two are related to the journal. As part of authentication (which will get a further exploration in its own post), we store the user's profile and identity token in local storage; the initial values for those items attempt to access those values. The two journal related items are simply initialized to an empty state.

Mutating the State

There are a few guiding principles for mutations in Vuex. First, they must be defined as part of the mutations property in the store; outside code cannot simply change one state value to another without going through a mutation. Second, they must be synchronous; mutations must be a fast operation, and must be accomplished in sequence, to prevent race conditions and other inconsistencies. Third, mutations cannot be called directly; mutations are “committed” against the current store. Mutations receive the current state as their first parameter, and can receive as many other parameters as necessary.

(Side note: although these functions are called “mutations,” Vuex is actually replacing the state on every call. This enables some really cool time-traveling debugging, as tools can replay states and their transformations.)

So, what do you do when you need to run an asynchronous process - like, say, calling an API to get the requests for the journal? These processes are called actions, and are defined on the actions property of the store. Actions receive an object that has the state, but it also has a commit property that can be used to commit mutations.

If you look at line 87 of store/index.js, you'll see the above concepts put into action1 as a user's journal is loaded. This one action can commit up to 4 mutations of state. The first clears out whatever was in the journal before, committing the LOADED_JOURNAL mutation with an empty object. The second sets the isLoadingJournal property to true via the LOADING_JOURNAL mutation. The third, called if the API call resolves successfully, commits the LOADED_JOURNAL mutation with the results. The fourth, called whether it works or not, commits LOADING_JOURNAL again, this time with false as the parameter.

A note about the names of our mutations and actions - the Vuex team recommends defining constants for mutations and actions, to ensure that they are defined the same way in both the store, and in the code that's calling it. This code follows their recommendations, and those are defined in action-types.js and mutation-types.js in the store directory.

Using the Store

So, we have this nice data store with a finite number of ways it can be mutated, but we have yet to use it. Since we looked at loading the journal, let's use it as our example (mpj:Journal.vue). On line 56, we wrap up the computed properties with ...mapState, which exposes data items from the store as properties on the component. Just below that, the created function calls into the store, exposed as $store on the component instance, to execute the LOAD_JOURNAL action.

The template uses the mapped state properties to control the display. On lines 4 and 5, we display one thing if the isLoadingJournal property is true, and another (which is really the rest of the template) if it is not. Line 12 uses the journal property to display a RequestCard (mpj:RequestCard.vue) for each request in the journal.

I mentioned developer sanity above; in the last post, I also said that having RequestCard decide whether it should show, instead of Journal deciding which ones it should show, would make sense. This is where we put those pieces together. The Vuex store is reactive; when data from it is rendered into the app, Vue will update the rendering if the store changes. So, Journal simply displays a “hang on” note when the journal is loading, and “all the requests” once it's loaded. RequestCard only displays if the request should be displayed. And the entire “brains” behind this is the thing that starts the whole process, the call to the LOAD_JOURNAL action. We aren't moving things around, we're simply displaying the state of things as they are!

Navigation (mpj:Navigation.vue) is another component that bases its display off state, and takes advantage of the state's reactivity. By mapping isAuthenticated, many of the menu items can be shown or hidden based on whether a user is signed in or not. Through mapping journal and the computed property hasSnoozed, the “Snoozed” menu link does not show if there are no snoozed requests; however, the first time a request from the journal is snoozed, this one appears just because the state changed.

This is one of the things that cemented the decision to use Vue for the front end2, and is one of my favorite features of the entire application. (You've probably picked up on that already, though.)

 

We've now toured our stateful front end; next time, we'll take a look at the API we use to get data into it.


1 Pun not originally intended, but it is now!

2 The others were the lack of ceremony and the Single File Component structure; both of those seem quite intuitive.


Saturday, August 25, 2018
  A Tour of myPrayerJournal: The Front End

NOTES:

  • This is post 2 in a series; see the introduction for all of them, and the requirements for which this software was built.
  • Links that start with the text “mpj:” are links to the 1.0.0 tag (1.0 release) of myPrayerJournal, unless otherwise noted.

Vue is a front-end JavaScript framework that aims to have very little boilerplate and ceremony, while still presenting a componentized abstraction that can scale to enterprise-level if required1. Vue components can be coded using inline templates or multiple files (splitting code and template). Vue also provides Single File Components (SFCs, using the .vue extension), which allow you to put template, code, and style all in the same spot; these encapsulate the component, but allow all three parts to be expressed as if they were in separate files (rather than, for example, having an HTML snippet as a string in a JavaScript file). The Vetur plugin for Visual Studio Code provides syntax coloring support for each of the three sections of the file.

Layout

Using the default template, main.js is the entry point; it creates a Vue instance and attaches it to an element named app. This file also supports registering common components, so they do not have to be specifically imported and referenced in components that wish to use them. For myPrayerJournal, we registered our common components there (mpj:main.js). We also registered a few third-party Vue components to support a progress bar (activated during API activity) and toasts (pop-up notifications).

App.vue is also part of the default template, and is the component that main.js attaches to the app element (mpj:App.vue). It serves as the main template for our application; if you have done much template work, you'll likely recognize the familiar pattern of header/content/footer.

This is also our first look at an SFC, so let's dig in there. The top part is the template; we used Pug (formerly Jade) for our templates. The next part is enclosed in script tags, and is the script for the page. For this component, we import one additional component (Navigation.vue) and the version from package.json, then export an object that conforms to Vue's expected component structure. Finally, styles for the component are enclosed in style tags. If the scoped attribute is present on the style tag, Vue will generate data attributes for each element, and render the declared styles as only affecting elements with that attribute. myPrayerJournal doesn't use scoped styles that much; Vue recommends classes instead, if practical, to reduce complexity in the compiled app.

Also of note in App.vue is the code surrounding the use of the toast component. In the template, it's declared as toast(ref='toast'). Although we registered it in main.js and can use it anywhere, if we put it in other components, they create their own instance of it. The ref attribute causes Vue to generate a reference to that element in the component's $refs collection. This enables us to, from any component loaded by the router (which we'll discuss a bit later), access the toast instance by using this.$parent.$refs.toast, which allows us to send toasts whenever we want, and have the one instance handle showing them and fading them out. (Without this, toasts would appear on top of each other, because the independent instances have no idea what the others are currently showing.)

Routing

Just as URLs are important in a regular application, they are important in a Vue app. The Vue router is a separate component, but can be included in the new project template via the Vue CLI. In App.vue, the router-view item renders the output from the router; we wire in the router in main.js. Configuring the router (mpj:router.js) is rather straightforward:

  • Import all of the components that should appear to be a page (i.e., not modals or common components)
  • Assign each route a path and name, and specify the component
  • For URLs that contain data (a segment starting with :), ensure props: true is part of the route configuration

The scrollBehavior function, as it appears in the source, makes the Vue app mimic how a traditional web application would handle scrolling. If the user presses the back button, or you programmatically go back 1 page in history, the page will return to the point where it was previously, not the top of the page.

To specify a link to a route, we use the router-link tag rather than a plain a tag. This tag takes a :to parameter, which is an object with a name property; if it requires parameters/properties, a params property is included. mpj:Navigation.vue is littered with the former; see the showEdit method in mpj:RequestCard.vue for the structure on the latter (and also an example of programmatic navigation vs. router-link).

Components

When software developers hear “components,” they generally think of reusable pieces of software that can be pulled together to make a system. While that isn't wrong, it's important to understand that “reusable” does not necessarily mean “reused.” For example, the privacy policy (mpj:PrivacyPolicy.vue) is a component, but reusing it throughout the application would be… well, let's just say a “sub-optimal” user experience.

However, that does not mean that none of our components will be reused. RequestCard, which we referenced above, is used in a loop in the Journal component (mpj:Journal.vue); it is reused for every request in the journal. In fact, it is reused even for requests that should not be shown; behavior associated with the shouldDisplay property makes the component display nothing if a request is snoozed or is in a recurrence period. Instead of the journal being responsible for answering the question "Should I display this request?", the request display answers the question "Should I render anything?". This may seem different from typical server-side page generation logic, but it will make more sense once we discuss state management (next post).

Looking at some other reusable (and reused) components, the page title component (mpj:PageTitle.vue) changes the title on the HTML document, and optionally also displays a title at the top of the page. The “date from now” component (mpj:DateFromNow.vue) is the most frequently reused component. Every time it is called, it generates a relative date, with the actual date/time as a tool tip; it also sets a timeout to update this every 10 seconds. This keeps the relative time in sync, even if the router destination stays active for a long time.

Finally, it's also worth mentioning that SFCs do not have to have all three sections defined. Thanks to conventions, and depending on your intended use, none of the sections are required. The “date from now” component only has a script section, while the privacy policy component only has a template section.

Component Interaction

Before we dive into the specifics of events, let's look again at Journal and RequestCard. In the current structure, RequestCard will always have Journal as a parent, and Journal will always have App as its parent. This means that RequestCard could, technically, get its toast implementation via this.$parent.$parent.toast; however, this type of coupling is very fragile2. Requiring toast as a parameter to RequestCard means that, wherever RequestCard is implemented, if it's given a toast parameter, it can display toasts for the actions that would occur on that request. Journal, as a direct descendant from App, can get its reference to the toast instance from its parent, then pass it along to child components; this only gives us one layer of dependency.

In Vue, generally speaking, parent components communicate with child components via props (which we see with passing the toast instance to RequestCard); child components communicate with parents via events. The names of events are not prescribed; the developer comes up with them, and they can be as terse or descriptive as desired. Events can optionally have additional data that goes with them. The Vue instance supports subscribing to event notifications, as well as emitting events. We can also create a separate Vue instance to use as an event bus if we like. myPrayerJournal uses both of these techniques in different places.

As an example of the first, let's look at the interaction between ActiveRequests (mpj:ActiveRequests.vue) and RequestListItem (mpj:RequestListItem.vue). On lines 41 and 42 of ActiveRequests (the parent), it subscribes to the requestUnsnoozed and requestNowShown events. Both these events trigger the page to refresh its underlying data from the journal. Lines 67 and 79 of RequestListItem both use this.$parent.$emit to fire off these events. This model allows the child to emit events at will, and if the parent does not subscribe, there are no errors. For example, AnsweredRequests (mpj:AnsweredRequests.vue) does not subscribe to either of these events. (RequestListItem will not show the buttons that cause those events to be emitted, but even if it did, emitting the event would not cause an error.)

An example of the second technique, a dedicated parent/child event bus, can be seen back in Journal and RequestCard. Adding notes and snoozing requests are modal windows3. Rather than specifying an instance of these per request, which could grow rather quickly, Journal only instantiates one instance of each modal (lines 19-22). It also creates the dedicated Vue instance (line 46), and passes it to the modal windows and each RequestCard instance (lines 15, 20, and 22). Via this event bus, any RequestCard instance can trigger the notes or snooze modals to be shown. Look through NotesEdit (mpj:NotesEdit.vue) to see how the child listens for the event, and also how it resets its state (the closeDialog() method) so it will be fresh for the next request.

 

That wraps up our tour of Vue routes and components; next time, we'll take a look at Vuex, and how it helps us maintain state in the browser.


1 That's my summary; I'm sure they've got much more eloquent ways to describe it.

2 ...and kinda ugly, but maybe that's just me.

3 Up until nearly the end of development, editing requests was a modal as well. Adding recurrence made it too busy, so it got to be its own page.
