Web Services in EK9
This is the final section on constructs; the sections after this cover Common Design Patterns, packaging and the compiler command-line arguments. Web Services are deployed in conjunction with, and by, Applications and Programs as shown in the introduction and structure sections.
These web services can be used just for rendering static/dynamic websites, or they can be used for microservices. The HTTPServer class is used in both cases, as is the service construct.
In its simplest form, it is possible to serve just pure file content. By using text constructs,
a simple templating solution (similar to
velocity templates)
can be implemented. Web services in EK9 are also aimed at providing full support for REST.
But there is nothing to prevent developers from creating full web UI tool kits.
The examples in this section show:
- Simple static html content served from in memory content
- A web server that just serves static file content
- Finally, a simple REST (RPC) CRUD microservice
Clearly it is possible to move beyond CRUD implementations and use GraphQL, HATEOAS with HAL/ATOM or a bespoke link content solution.
But as the focus of this documentation is just to show the mechanisms and APIs built into EK9
a simple CRUD application will suffice.
This final example does deal with caching, etags and concurrency.
So while the first two examples are short, the last example is quite long. It highlights
different language mechanisms (composition and a blend of dynamic functions and classes)
that can be used in EK9.
There is also a section on interaction with the developed web service, which is also quite long. This aims to highlight the general value of caching, pre-condition checks and general CRUD (RPC) type web service interactions.
Immutable software versus mutable configuration
You might argue that serving 'in memory' static content from an application is not really
viable for anything in production; databases and various other 'stores' should be used to store
this configuration data in isolation.
Widespread use of 'docker' and automated CI/CD development cycles means the speed and control of
deploying fully tested and version-controlled microservices is just as easy as updating
configuration.
Automated deployments
For example, this site is actually fully deployed in an automated way when the main repository is built and that
build is successful, as is the
Javadoc.
What this means is that deployment is full and complete (in terms of built artifacts) via a CI/CD pipeline. In this
case GitHub
actions are used.
If organisations have comprehensive CI/CD pipelines with automated tests, they can deploy services to live multiple times per day. In that case, making a change and doing a new deployment is simple and quick.
As the move towards 'immutable' infrastructure has progressed, applications have also become 'immutable'.
This now means that unchanging configuration data and fixed information can be bound into
an application.
This is one of the main reasons EK9 has the text construct. It is designed
to facilitate the notional separation of data from code; whilst still allowing it to be bound into a
version controlled release of an application. If you accept and value the use of 'caching'; then you
already have 'immutable' configuration data (at least for the period of a cache lifetime).
If you prefer the alternative approach of putting everything (including immutable data) into a database of some sort;
then you can continue to do that with EK9.
But it could be argued that only truly mutable data should be stored that way.
Clearly the CRUD example shown later should store the data in some sort of
resilient data store and not use a simple in memory model. But the focus of these examples is
Web Services not resilient data storage.
It really comes down to the confidence in automated testing, the speed of builds and the rapidity and automation in
deploying new software. In general, it probably boils down to 'fear' and 'blame' if we're being honest. Some
see updates to configuration data as somehow less risky than deploying a new version of software.
In some cases it may be more risky as there can be fewer controls in place.
As an aside, with technologies like 'cloud formation' and 'pulumi'/'terraform', you could take the approach of having all the following in a single source repository:
- Your code for the application being deployed
- The 'AWS', 'Azure' or 'GCP' etc. terraform code
- All the 'Unit Tests'
- All the 'Component Tests'
- Any stubs, fakes, mocks or mock services
- Any 'web service' contracts
- One or more links to predefined configurations
This approach then enables full deployments to be automated. Clearly 'secrets', like keys,
passwords, certificates etc. have to be externalised and provisioned separately.
But this does allow development teams to actually run the whole of the service under development,
and in a wide range of different scenarios.
Back now to the main point of this page - web services. The reason for raising the above points is that web services now fit into a much wider solution and are not limited to just 'developer' preferences.
Verbs and HTTP Headers
EK9 web services really focus on caching support and operations like GET, POST, PUT, PATCH and DELETE. EK9 enforces stale content checking and promotes the use of etag in preference to the last-modified header.
If any of these terms are unfamiliar to you, please read up or refresh your knowledge of HTTP protocols and REST web services in general as this document assumes prior knowledge.
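As a refresher on the etag mechanism itself (plain HTTP logic, not EK9 code), here is a minimal Python sketch of how a server decides between a full 200 response and a 304 'not modified' using the If-None-Match request header; the function names are illustrative:

```python
import hashlib

def make_etag(content):
    # A strong ETag derived from the content itself (the EK9 examples derive
    # theirs from the content via HMAC().SHA256 in the same spirit)
    return '"' + hashlib.sha256(content.encode("utf-8")).hexdigest() + '"'

def respond(content, if_none_match):
    etag = make_etag(content)
    if if_none_match == etag:
        # The client's cached copy is still valid: no body is sent
        return 304, ""
    return 200, content

status, body = respond("<p>Hello, World</p>", None)
assert status == 200
status, body = respond("<p>Hello, World</p>", make_etag("<p>Hello, World</p>"))
assert status == 304 and body == ""
```

The key point is that the etag must be cheap and deterministic for unchanged content, so that revalidation costs far less than regenerating the body.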
The Examples
The examples below only focus on the service construct; the application and program have been covered elsewhere.
Static HTML content
The following example shows the text construct being used to return some simple HTML content. The important part here is the fact that the content is served from site/index.html via the GET verb. This is in effect the 'route' that many other frameworks use.
The other important areas to focus on are the setting of the etag, status, content, contentType and finally but importantly the cacheControl.
These aspects are very important when developing web services, as they can really help improve performance by offloading processing to clients and caching proxies, at the cost of some data not being as fresh as a full end-to-end call.
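To make the cacheControl value concrete: a Python sketch of how a client or proxy interprets a directive such as `public,max-age=3600,must-revalidate` (a deliberately simplified freshness rule, not a full RFC 9111 implementation):

```python
def is_fresh(cache_control, age_seconds):
    # Split the header into directives; keep the value after '=' when present
    directives = {}
    for part in cache_control.split(","):
        name, _, value = part.strip().partition("=")
        directives[name] = value
    max_age = int(directives.get("max-age") or 0)
    # Within max-age a cached copy may be served without contacting the origin;
    # after that, must-revalidate forces a conditional (etag/last-modified) request.
    return age_seconds <= max_age

assert is_fresh("public,max-age=3600,must-revalidate", 60) is True
assert is_fresh("public,max-age=3600,must-revalidate", 7200) is False
```

So with `max-age=3600` a proxy can absorb an hour of repeat traffic per resource before your service is consulted again, and even then only a cheap conditional check is needed.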
#!ek9
defines module introduction

  defines text for "en"

    WebSite

      index()
        `<html>
        <head>
        <meta charset="UTF-8"/>
        </head>
        <body>
        <p>Hello, World</p>
        </body>
        </html>`

  defines service

    Site :/site

      index() as GET for :/index.html
        <- response as HTTPResponse: () with trait HTTPResponse

          //Normally you'd use a component and inject it (i.e. a singleton with web site within)
          webSite <- WebSite("en")
          etag as String: String()

          override etag()
            <- rtn as String: String()
            etag :=? HMAC().SHA256(content())
            rtn :=: etag

          override cacheControl()
            <- rtn as String: "public,max-age=3600,must-revalidate"

          override contentType()
            <- rtn as String: "text/html"

          override contentLanguage()
            <- rtn as String: "en"

          override content()
            <- rtn as String: webSite.index()

          override status()
            <- rtn as Integer: 200
...
What's being shown
Even when dealing with just a simple HTML page; it is really important to focus on HTTP technology and techniques. There is a lot of capability in the HTTP protocol. The EK9 service construct has been created with the singular purpose of being the place to put all that (web service focussed) code.
This means the service construct is the place to coordinate dealing with content negotiation,
'varies', 'mime types' and data formats.
But most importantly the response should always be a 'dynamic' class that has the trait
of HTTPResponse.
This is really important as it is possible with EK9 to avoid doing any real
hard processing to get content if you use the etag or lastModified methods correctly.
If you also set the cacheControl then it is possible your code won't even get called
(once supplied the first time)!
The EK9 HTTP Server deals with the remote client (or hopefully the intermediate proxy put in front of the EK9 server). It calls on your response:
- Firstly it only gets the etag or lastModified
- then it checks if the calling system has passed any headers through
- if the headers are present then the EK9 HTTP server may respond with a 'not modified'
- if this is the case then the call to content() is never made
- only if needed will it make the most expensive call content() as it will trigger your business processing (which could access many Objects, Databases or other services)
In other words it does a lazy evaluation, trying to avoid the expensive call or getting the content if at all possible. So the web-service language construct in EK9 is much more like a 'framework', where you as an EK9 developer just plug in your code to fit the specifically pre-designed HTTP flows and EK9 APIs.
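The lazy-evaluation flow above can be sketched in Python (a hypothetical `Response` with a cheap `etag()` and an expensive `content()`; the names are illustrative, not the EK9 API):

```python
calls = {"content": 0}

class Response:
    def etag(self):
        return '"v1"'          # cheap to compute

    def content(self):
        calls["content"] += 1  # expensive: imagine database or service access here
        return "big payload"

def serve(response, request_headers):
    # 1. First only the cheap etag is fetched
    etag = response.etag()
    # 2./3. If the client sent a matching If-None-Match header, reply 'not modified'
    if request_headers.get("If-None-Match") == etag:
        return 304, ""         # 4. content() is never called
    # 5. Only if needed is the expensive content() call made
    return 200, response.content()

assert serve(Response(), {"If-None-Match": '"v1"'}) == (304, "")
assert calls["content"] == 0   # the expensive call was skipped entirely
assert serve(Response(), {}) == (200, "big payload")
assert calls["content"] == 1
```

This is why implementing etag() cheaply matters so much: in the revalidation case the expensive business processing never runs at all.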
In the example above, the code is trivial; it just gets content from a text Object in memory. But by providing the additional cacheControl directives, a caching proxy like squid would not make any call at all to your service if the cached content was still in date. Better than that, even when it is out of date, the squid proxy would use the etag as part of a request header. The EK9 server will then just respond with a 'not modified' if your code returns the same etag value.
Having a quick and 'cheap' way of getting a resource's etag or lastModified value is
important.
It's hard to overstate how good the HTTP protocol is; it's also quite common to see it severely under-utilised.
A mini Web Server
The following example shows a really cut-down web server; this application just serves text file content.
#!ek9
defines module introduction

  defines service

    WebServer :/website

      documentRoot()
        <- rtn as FileSystemPath: FileSystemPath(EnvVars().get("DOCUMENT_ROOT"))
        if not rtn?
          throw Exception("Invalid Document Root")

      file() as GET for :/{fileName}
        -> fileName as String :=: PATH "fileName"
        <- response as HTTPResponse: (webServer: this, fileName: fileName) with trait of HTTPResponse

          //Stateful variable of last time file was modified
          //Initially unset - as unknown
          lastModified as DateTime: DateTime()

          private lastModifiedOfTextFile()
            <- rtn as DateTime: DateTime()
            file <- textFile()
            if not file.isReadable()
              Stdout().println(`${file} is not readable`)
            if not file.isFile()
              Stdout().println(`${file} is not a file`)
            rtn :=: file.lastModified()

          private textFile()
            <- rtn as TextFile: TextFile(webServer.documentRoot() + FileSystemPath(fileName))

          override lastModified()
            <- rtn as DateTime: DateTime()
            lastModified :=? lastModifiedOfTextFile()
            rtn :=: lastModified

          override cacheControl()
            <- rtn as String: "public,max-age=3600,must-revalidate"

          override contentType()
            <- rtn as String: "text/html"

          override contentLanguage()
            <- rtn as String: "en"

          override status()
            <- rtn as Integer: lastModified? <- 200:404

          override content()
            <- rtn as String: String()
            if lastModified?
              cat textFile() > rtn
...
There are some points of interest in this example.
- The location where files are served from is set from an environment variable (DOCUMENT_ROOT)
- The service WebServer serves from 'website/'
- The service accepts a 'placeholder' path variable on an end point called 'file'
- The anonymous dynamic class captures the WebServer instance and the fileName
- This service uses lastModified rather than etag for caching support
- The caching directive public,max-age=3600,must-revalidate really helps offload high volume calls
- The anonymous dynamic class has a number of utility methods: lastModifiedOfTextFile()/textFile()
- The standard EK9 file handling is used to locate the file
- A stream pipeline is used to read the contents of the file into a String to be returned
- HTTP status of 200 or 404 is given based on whether lastModified is set or not
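The lastModified variant of conditional handling follows the same pattern as etags; a Python sketch of the comparison the server makes (timestamps and names are illustrative):

```python
from datetime import datetime, timezone

def serve_file(last_modified, if_modified_since):
    # Unreadable or missing file: lastModified stays unset -> 404
    if last_modified is None:
        return 404
    # If the client's cached copy is at least as new, no body needs to be sent
    if if_modified_since is not None and last_modified <= if_modified_since:
        return 304
    return 200

mtime = datetime(2024, 1, 1, tzinfo=timezone.utc)
assert serve_file(None, None) == 404
assert serve_file(mtime, mtime) == 304
assert serve_file(mtime, None) == 200
```

Note how the single unset/set state of lastModified drives both the 404 decision and the caching behaviour, just as in the EK9 example above.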
You may be wondering why all this processing is done inside a dynamic class that has a trait of HTTPResponse. There are a couple of reasons for this:
- As mentioned before: delayed processing
- Thread safety: there will be multiple client calls coming into the file() method in the 'WebServer' instance
- State can be held in each response (lastModified/fileName for example).
- As you will see in later examples; it is possible to use composition for common aspects of the response.
- Note that the HTTPResponse is quite short-lived and only services the single client that initiated it.
A CRUD type Server
This next example is much longer; it is based around a repository of postal Addresses. The definition of
the Address record and its marshalling to and from JSON is also included as an example.
EK9 does have the JSON type that can be used.
Etags and HTTP verbs: POST (C), GET (R), PUT (U), DELETE (D) and PATCH (merge) are covered. But attention is also paid to concurrency issues through the use of a mutex lock. This has been done through an example that shows how to take an unsafe (in terms of concurrency) collection and make it safe for multiple threads to access.
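The mutex-lock idea, taking a collection that is unsafe for concurrent use and exposing it only through a lock, can be sketched in Python (illustrative names only, not the EK9 MutexLock/MutexKey API):

```python
import threading

class LockedStore:
    """Wraps a plain dict so every access happens while a lock is held."""

    def __init__(self):
        self._lock = threading.Lock()
        self._items = {}

    def with_lock(self, operation):
        # Equivalent in spirit to MutexLock.enter(accessKey): the operation
        # only ever sees the protected value while the lock is held, so the
        # unsafe dict is never touched outside the lock.
        with self._lock:
            return operation(self._items)

store = LockedStore()
store.with_lock(lambda items: items.update({"id-1": "121 Admin Rd."}))
assert store.with_lock(lambda items: items["id-1"]) == "121 Admin Rd."
assert store.with_lock(len) == 1
```

The design choice mirrors the EK9 example: callers never get a direct reference to the shared collection, only the ability to run an operation against it under the lock.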
The approach of this code example has been to show a blend of functional and object-oriented approaches with a focus on composition. The composition approach has been used extensively in creating the HTTPResponse.
Some web service 'methods' have been left as 'long hand' so that the processing is more directly obvious. But where the processing is very similar, additional classes and functions have been composed in different ways to deliver the functionality.
The HTTP response codes '404' etc. have not been hidden, nor have they been abstracted to constants. They have been left as is. So '404' could have been abstracted to a constant of NOT_FOUND (but this adds little other than code length for this example).
The internal model
The first part (and bulk) of the code just sets up the data structures and constructs needed for the example. The second part will focus on just the web services aspects.
#!ek9
defines module introduction

  defines type

    //Just for strong typing - no constraints
    AddressId as String

  defines record

    Address
      id as AddressId: AddressId()
      street as String: String()
      street2 as String: String()
      city as String: String()
      state as String: String()
      zipcode as String: String()

      Address()
        -> from as Address
        this :=: from

      Address()
        -> from as Optional of Address
        if from?
          this :=: from.get()

      //Copy operator
      operator :=:
        -> from as Address
        id :=: from.id
        street :=: from.street
        street2 :=: from.street2
        city :=: from.city
        state :=: from.state
        zipcode :=: from.zipcode

      //Merge only if incoming address parts are set
      operator :~:
        -> address as Address
        if address.street?
          street :=: address.street
        if address.street2?
          street2 :=: address.street2
        if address.city?
          city :=: address.city
        if address.state?
          state :=: address.state
        if address.zipcode?
          zipcode :=: address.zipcode

      operator ? as pure
        //street2 is optional and can be omitted
        <- rtn as Boolean: id? and street? and city? and state? and zipcode?

  defines text for "en"

    AddressToOutputFormat

      //An example of how you could use the text construct to create very specific JSON if needed
      //Though there is a JSON class if you'd prefer to use that.
      toJSON()
        -> address as Address
        `{
        "address": {
        "id": "${address.id}",
        "street": "${address.street}",
        ${optionalJSON("street2", address.street2)}
        "city": "${address.city}",
        "state": "${address.state}",
        "zipcode": "${address.zipcode}"
        }
        }`

  defines function

    //Used for specific operations to be applied to a set of addresses
    addressOperation() as abstract
      ->
        addresses as AddressAccess
        address as Address

    //Deals with wrapping the operation in the calls to deal with the mutex lock.
    safeOperation()
      ->
        lockedAddressSet as MutexLock of AddressSet
        address as Address
        operation as addressOperation
      accessKey <- (address, operation) extends MutexKey of AddressSet as function
        operation(value, address)
      lockedAddressSet.enter(accessKey)

    addressFromJson()
      -> addressInJSONFormat as String
      <- rtn as Address: Address()
      addressParts <- addressDictionaryFromJSON(addressInJSONFormat)
      rtn.id: AddressId(addressParts.get("id"))
      rtn.street: String(addressParts.get("street"))
      rtn.street2: String(addressParts.get("street2"))
      rtn.city: String(addressParts.get("city"))
      rtn.state: String(addressParts.get("state"))
      rtn.zipcode: String(addressParts.get("zipcode"))

    addressDictionaryFromJSON()
      -> json as String
      <- rtn as Dict of (String, String): Dict()
      stdout <- Stdout()
      //Just pull out the address bits from within {}'s using a regex
      extractAddressPartsEx <- /\{\s+"address":\s+\{\s+([^}]*?)\s+\}\s+\}/
      //Now break into lines based on commas
      addressItems <- json.group(extractAddressPartsEx).first().split(/,/)
      //Map to a dictionary and return.
      rtn: cat addressItems | map with toDictEntry | collect as Dict of (String, String)

    toDictEntry()
      -> line as String
      <- rtn as DictEntry of (String, String): DictEntry()
      keyValues <- line.trim().split(/:/)
      rtn: DictEntry(keyValues.first().trim().trim('"'), keyValues.last().trim().trim('"'))

    copyAddress()
      -> from as Address
      <- to as Address: Address(from)

    wrapInBrackets()
      -> value as String
      <- rtn as String: `[ ${value} ]`

    commaSeparated()
      ->
        firstPart String
        secondPart String
      <- rtn as String: firstPart? and secondPart? <- firstPart + "," + secondPart : String()

    addressToJSON()
      -> address as Address
      <- addressAsString as String: AddressToOutputFormat("en").toJSON(address)

    addressListToJSON()
      -> addresses as List of Address
      <- listAsString as String: cat addresses
        | map with addressToJSON
        | join with commaSeparated
        | map with wrapInBrackets
        | collect as String

    optionalJSON()
      ->
        name as String
        value as String
      <- rtn as String: value? <- `"${name}": "${value}",` else String()

  defines trait

    AddressAccess
      hash()
        <- rtn as String: String()

      hashOfAddress()
        -> id as AddressId
        <- rtn as String: String()

      byId()
        -> id as AddressId
        <- rtn as Address?

      listAll()
        <- rtn as List of Address: List()

      operator +=
        -> address as Address
        assert address?

      operator -=
        -> address as Address
        assert address?

      //merge with an existing address
      operator :~:
        -> address as Address
        assert address?

      //replace an existing address
      operator :^:
        -> address as Address
        assert address?

      operator contains as pure
        -> addressId as AddressId
        <- rtn as Boolean: Boolean()

  defines class

    AddressSet with trait of AddressAccess
      hash as String: HMAC().SHA256(GUID())
      addresses as Dict of (AddressId, Address): Dict()
      hashes as Dict of (AddressId, String): Dict()

      override hash()
        <- rtn as String: this.hash

      override hashOfAddress()
        -> id as AddressId
        <- rtn as String: String(hashes.get(id))

      override byId()
        -> id as AddressId
        <- rtn as Address: Address(addresses.get(id))

      override listAll()
        <- rtn as List of Address: List()
        iter <- addresses.values()
        cat iter | map with copyAddress > rtn

      private includeAddress()
        -> address as Address
        copy <- Address(address)
        addresses += DictEntry(copy.id, copy)
        hashes += DictEntry(copy.id, HMAC().SHA256(addressToJSON(copy)))
        updateHash()

      private updateHash()
        hash :=: HMAC().SHA256(GUID())

      override operator +=
        -> address as Address
        assert address?
        if this not contains address
          includeAddress(address)

      override operator -=
        -> address as Address
        assert address.id?
        addresses -= address.id
        hashes -= address.id
        updateHash()

      override operator :~:
        -> address as Address
        assert address.id? //We don't assert whole address because it can be partial
        currentAddress <- addresses.get(address.id)
        if currentAddress?
          //make a new copy and then merge the two.
          updatedAddress <- Address(currentAddress)
          updatedAddress :~: address
          includeAddress(updatedAddress)

      override operator :^:
        -> address as Address
        assert address?
        if this contains address
          includeAddress(address)

      operator contains as pure
        -> address as Address
        <- rtn as Boolean: this contains address.id

      override operator contains as pure
        -> addressId as AddressId
        <- rtn as Boolean: addresses contains addressId

    //Example of wrapping a shared data set in a mutex lock.
    LockableAddressSet with trait of AddressAccess
      lockedAddressSet as MutexLock of AddressSet: MutexLock(AddressSet())

      override hash()
        <- rtn as String: String()
        accessKey <- (rtn) is MutexKey of AddressSet as function
          rtn :=: value.hash()
        lockedAddressSet.enter(accessKey)

      override hashOfAddress()
        -> id as AddressId
        <- rtn as String: String()
        accessKey <- (id, rtn) is MutexKey of AddressSet as function
          rtn :=: value.hashOfAddress(id)
        lockedAddressSet.enter(accessKey)

      override byId()
        -> id as AddressId
        <- rtn as Address: Address()
        accessKey <- (id, rtn) is MutexKey of AddressSet as function
          rtn :=: value.byId(id)
        lockedAddressSet.enter(accessKey)

      override listAll()
        <- rtn as List of Address: List()
        accessKey <- (rtn) is MutexKey of AddressSet as function
          rtn += value.listAll()
        lockedAddressSet.enter(accessKey)

      override operator +=
        -> address as Address
        //You can inline this simple dynamic function if you wish
        //Also used named parameters
        safeOperation(
          lockedAddressSet: lockedAddressSet,
          address: address,
          operation: () is addressOperation (addresses += address)
          )

      override operator -=
        -> address as Address
        //Or you can inline all on one line, without naming the parameters.
        safeOperation(lockedAddressSet, address, () is addressOperation (addresses -= address))

      override operator :~:
        -> address as Address
        //Or define a dynamic function and pass in as delegate (my preferred way)
        operation <- () is addressOperation
          addresses :~: address
        safeOperation(lockedAddressSet, address, operation)

      override operator :^:
        -> address as Address
        operation <- () is addressOperation
          addresses :^: address
        safeOperation(lockedAddressSet, address, operation)

      override operator contains as pure
        -> addressId as AddressId
        <- rtn as Boolean: false
        accessKey <- (addressId, rtn) is MutexKey of AddressSet as function
          rtn :=: value contains addressId
        lockedAddressSet.enter(accessKey)

  defines function

    plainNonCacheableHTTPResponse()
      <- rtn as HTTPResponse: () with trait of HTTPResponse
        override cacheControl()
          <- rtn as String: "no-store,max-age=0"
        override contentType()
          <- rtn as String: "text/plain"
        override contentLanguage()
          <- rtn as String: "en"

    cacheableHTTPResponse()
      <- rtn as HTTPResponse: () with trait of HTTPResponse
        override cacheControl()
          <- rtn as String: "public,max-age=5,must-revalidate"
        override contentType()
          <- rtn as String: "application/json"
        override contentLanguage()
          <- rtn as String: "en"

  defines class

    ByETagHTTPResponse with trait of HTTPResponse by delegate
      repository as Repository!
      delegate as HTTPResponse?
      addressId as AddressId?
      provideContentLocation as Boolean?
      status as Integer: 200
      etagOfAddress as String: String()

      default private ByETagHTTPResponse()

      ByETagHTTPResponse()
        ->
          addressId as AddressId
          delegate as HTTPResponse
          provideContentLocation as Boolean
        assert addressId? and delegate? and provideContentLocation?
        this.addressId: addressId
        this.provideContentLocation: provideContentLocation
        this.delegate: delegate

      ByETagHTTPResponse()
        ->
          addressId as AddressId
          delegate as HTTPResponse
        this(addressId, delegate, true)

      override etag()
        <- rtn as String: String()
        //Only call if un-set.
        etagOfAddress :=? repository.addresses().hashOfAddress(addressId)
        rtn :=: etagOfAddress
        if ~etagOfAddress?
          status: 404

      override status()
        -> newStatus as Integer
        status :=: newStatus

      override status()
        <- rtn as Integer: status

      override contentLocation()
        <- rtn as String: status < 400 and provideContentLocation <- `/addresses/${addressId}` else String()
...
The code above shows a mix of EK9 constructs; these are used to hold a set of Addresses (a record) in memory, but limit access via a mutex lock. Dynamic functions as delegates have been employed for this.
By extracting AddressAccess out to a trait construct, two implementations can be defined. The first is the actual address storage and the second is just a simple thread-safe wrapper.
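The PATCH-style merge operator (`:~:`) on the Address record only copies fields that are actually set in the incoming partial address. A Python sketch of that rule (field names taken from the Address record; the dict representation is illustrative, with None standing in for an unset EK9 value):

```python
def merge(current, partial):
    # Only fields present and set in the partial update overwrite the original,
    # mirroring the record's :~: merge operator.
    merged = dict(current)
    for key, value in partial.items():
        if value is not None:
            merged[key] = value
    return merged

current = {"street": "15 Rose St", "street2": "Apt. B-1", "city": "Concord"}
update = {"street": "16 Rose St", "street2": None}
assert merge(current, update) == {
    "street": "16 Rose St", "street2": "Apt. B-1", "city": "Concord"
}
```

This is what makes PATCH a safe partial update: unset fields in the request body leave the stored values untouched, rather than blanking them as a PUT-style replace would.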
The next and final part covers the service definition. This shows the use of the caching HTTPResponse implementations.
...
  defines service

    //The name of the service and the uri it is mapped to
    Addresses :/addresses

      byId() as GET for :/{address-id}
        -> id as String :=: PATH "address-id" //required because different name
        <- response as HTTPResponse?
        addressId <- AddressId(id)
        delegate <- ByETagHTTPResponse(addressId, cacheableHTTPResponse(), false)
        response: (addressId, delegate) with trait of HTTPResponse by delegate
          //Expect the repository to be injected
          repository as Repository!

          override content()
            <- rtn as String: String()
            if delegate.status() <> 404
              rtn: addressToJSON(repository.addresses().byId(addressId))
              status(200)

      //A POST
      operator += :/
        -> request as HTTPRequest :=: REQUEST
        <- response as HTTPResponse: (
          request: request,
          nonCacheable: plainNonCacheableHTTPResponse()
          ) with trait of HTTPResponse by nonCacheable

          //Expect the repository to be injected
          repository as Repository!
          address as Address: Address()
          status as Integer: 201

          override content()
            <- rtn as String: String()
            address: addressFromJson(request.content())
            //But the server sets the ID!
            if address.id?
              status := 422 //unprocessable entity
              rtn: "Do not supply ID in Address, server will set this"
            else
              address.id: AddressId(GUID())
              if ~address?
                status := 422 //unprocessable entity
              else if repository.addresses() contains address.id
                status := 409 //conflict
              else
                repository.addresses() += address

          override contentLocation()
            <- rtn as String: status == 201 <- `/addresses/${address.id}` else String()

          override status()
            <- rtn as Integer: status

      //A DELETE
      operator -= :/{id}
        -> id as String //Assume PATH
        <- response as HTTPResponse: (
          addressId: AddressId(id),
          delegate: ByETagHTTPResponse(addressId, plainNonCacheableHTTPResponse(), false)
          ) with trait of HTTPResponse by delegate

          repository as Repository!

          override content()
            <- rtn as String: String()
            //Only if the etag was found can we delete it!
            if delegate.status() <> 404
              repository.addresses() -= repository.addresses().byId(addressId)
              status(204)

      //A PATCH which is a merge
      //Note with dynamic variable capture, you can still do it all on one line if you wish.
      operator :~: :/{id}
        ->
          id as String //Assume PATH
          incomingContent as String :=: CONTENT
        <- response as HTTPResponse: (
          addressId: AddressId(id),
          incomingContent: incomingContent,
          delegate: ByETagHTTPResponse(addressId, plainNonCacheableHTTPResponse())
          ) with trait of HTTPResponse by delegate

          repository as Repository!

          override content()
            <- rtn as String: String()
            if delegate.status() <> 404
              address <- addressFromJson(incomingContent)
              if ~address.id?
                status(422) //unprocessable entity
              else if address.id <> addressId
                status(400) //the id on the url is not the same as the id in the body content
              else
                repository.addresses() :~: address
                status(204)

      //A PUT which is a replace for an existing address
      //Now the dynamic variable capture allows multiple lines and various formatting within the ()
      operator :^: :/{id}
        ->
          id as String //Assume PATH
          content as String :=: CONTENT
        <- response as HTTPResponse: (
          addressId: AddressId(id),
          incomingContent: content,
          delegate: ByETagHTTPResponse(addressId, plainNonCacheableHTTPResponse())
          ) with trait of HTTPResponse by delegate

          repository as Repository!

          override content()
            <- rtn as String: String()
            if delegate.status() <> 404
              address <- addressFromJson(incomingContent)
              if ~address?
                status(422) //unprocessable entity
              else if address.id <> addressId
                status(400) //the id on the url is not the same as the value in the body content
              else
                repository.addresses() :^: address
                status(204)

      //Note it is now possible to use named parameters in dynamic variable capture
      //So now, we can do a simpler one liner, define the name of the property and use an expression
      //to set it up, then use the 'by' on a trait to delegate as much or as little as we want to it.
      listAll() :/
        <- response as HTTPResponse: (cacheable: cacheableHTTPResponse()) with trait of HTTPResponse by cacheable
          repository as Repository!

          override etag()
            <- rtn as String: repository.addresses().hash()

          override content()
            <- rtn as String: addressListToJSON(repository.addresses().listAll())

  defines component

    Repository as abstract
      addresses() as abstract
        <- rtn as AddressAccess?

    InMemoryRepository extends Repository
      addresses as AddressAccess: LockableAddressSet()

      InMemoryRepository()
        addresses += Address(AddressId(GUID()), "121 Admin Rd.", String(), "Concord", "NH", "03301")
        addresses += Address(AddressId(GUID()), "67 Paperwork Ave", String(), "Manchester", "NH", "03101")
        addresses += Address(AddressId(GUID()), "15 Rose St", "Apt. B-1", "Concord", "NH", "03301")
        addresses += Address(AddressId(GUID()), "39 Sole St.", String(), "Concord", "NH", "03301")
        addresses += Address(AddressId(GUID()), "99 Mountain Rd.", String(), "Concord", "NH", "03301")

      override addresses()
        <- rtn as AddressAccess: addresses

  defines application

    AccessPoint
      //We could register other services and components here
      register InMemoryRepository() as Repository
      register Addresses()

  defines program

    TestAddressOutput with application of AccessPoint
      //Expect injection
      repository as Repository!

      stdout <- Stdout()
      address1 <- Address(AddressId(GUID()), "121 Admin Rd.", String(), "Concord", "NH", "03301")
      address2 <- Address(AddressId(GUID()), "15 Rose St", "Apt. B-1", "Concord", "NH", "03301")
      addresses <- [address1, address2]

      iter <- addresses.iterator()
      //promote iterator to a list
      backToList <- #^iter

      stdout.println("Address as JSON")
      stdout.println(addressToJSON(address1))

      jsonAddress <- addressFromJson(addressToJSON(address1))
      stdout.println("Rebuilt address is [" + addressToJSON(jsonAddress) + "]")

      stdout.println("Now arrays")
      stdout.println(addressListToJSON(addresses))

      stdout.println("Done basics now all addresses")
      allAddresses <- repository.addresses().listAll()
      stdout.println(`Hash of all addresses is [${repository.addresses().hash()}]`)
      stdout.println(addressListToJSON(allAddresses))
      stdout.println("Done!")
//EOF
That's quite a long example with quite a few constructs to take in and understand. But hopefully now you can see the role the 'service' construct plays and how important the mutex locks and 'injection' of 'components' are.
Software development versus solution deployment
These next few paragraphs are an aside from EK9 and Web Services and are a more general observation (which you may or may not find you agree with).
A bit of history
There was a time when a software developer would understand the full end-to-end processing of an application. This would also include the physical hardware, the networking and even where the database (if used) was installed on the disks (and the speed of those disks). But as each area has become more specialised, software development has become fragmented (front-end, back-end, DBA, DBD, network engineer, security specialist). Then drawing all that together is your Enterprise Architect! These statements are not intended to be offensive (and your experiences may be very different), but hopefully you can see the point being made.
In short, as a pure software developer (coder) now, you may not even know about caching proxies like squid. When there is a performance problem with 'your' application, you decide it needs more hardware, CPUs or the database needs more memory etc. Maybe it even needs re-designing! Maybe the DBD needs to take a look at the database design and 'do some magic'.
The Experts
You'll probably find (if you work for an organisation, rather than just starting out developing software)
there are a number of 'Characters'; there is some sort of 'pecking order', or companies have specific histories
with specific technologies or vendors.
In short: techie politics, or just plain office politics. What I'm saying is that most organisations are truly
dysfunctional, have distorted views, are technologically out of balance and are mismanaged.
But you still have to develop software in that environment (if you want to be paid); let's just be honest, that's what it is like.
Applications in a wider deployment context
Why is this being discussed here and now?
It's because we've left the safe realm of just coding
(where we can argue about indentation and the merits of 'for loops'). We're
now in the realm where that coded software fits into a wider solution. It is now starting to show
its characteristics for function/performance and reliability. It is no longer just visible to the
software developers that created it (with the appropriate indentation after much debate).
If you are an expert with a hammer (all things look like nails) - let's pick on DBDs: 'your database needs views layering on it'!
Now let's pick on the Unix/Linux/Windows guru: 'you need OS version X with hyper something or other'.
Or maybe the Java/Spring master: 'Ah, Spring Boot will solve your problems'.
Let's not even start on the Docker and Kubernetes guys; they'll build you a home-crafted, lumpy 'EJB' container (maybe that's being a bit harsh).
Finally the front-end guys: we need to reinvent the UI framework wheel again!
The point being made here is that each one of those experts may or may not have a point. What they offer really may be of value (or it may not). Now the politics: it's very hard to get people to be objective about the issues faced and to accept their solution might not be applicable. Depending on force of character, technological history or just poor management, you'll get certain solutions promoted over what might be the most appropriate one.
See this article about the schisms inside organisations.
But in general the most robust/reliable/cost-effective software is software that does not get written!
That's probably not what you want to hear; as the developer of a new language (EK9) it's not what I want to say either! But I have to be honest - even if it means writing no new code in a wonderful new language (EK9).
Development
Even when developing the compiler for this EK9 language, I try and follow these ideas:
- Write an expected output for each input (TDD)
- Write very clear code
- Write as little code as possible
- Avoid calling that code as much as possible
- Go back to first principles when you have any sort of issue
- Ask the un-askable questions
- Be honest (especially with yourself)
- Accept you will make mistakes and not always follow the above (as will others) - be nice to yourself and others.
Anyway, back to something techie - let's forget all that politics stuff and bury our heads back in the sand (interesting technology). See, be honest - that's what you and I both wanted at this point.
Networking/Caching
If you can get some sort of fast cache to supply the content your code provides - one that is cheaper, supports a higher volume of concurrent calls and is reasonably current - then do just that. There are times when you can't, but when you can, do.
If you focus on caching and can accept there is always some time when the user is looking at stale
information, then you can take the load off your application with something like nginx,
squid or AWS 'API Gateway', for example. Take care with your responses and 'Vary' headers.
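To make the idea concrete, here is a minimal sketch (in Python rather than EK9, purely to illustrate the HTTP mechanics) of building response headers that let an upstream cache serve content without hitting the origin. The `cacheable_headers` helper is hypothetical, not part of any EK9 or Python web API:

```python
from email.utils import formatdate

def cacheable_headers(max_age_seconds, vary=("Accept", "Accept-Encoding")):
    """Build response headers that let an upstream cache (nginx, squid,
    an API gateway) serve this content without re-contacting the origin."""
    return {
        # 'public, max-age=N' allows shared caches to store the response
        # and reuse it for N seconds; the user may see stale data for
        # at most that long.
        "Cache-Control": f"public, max-age={max_age_seconds}",
        # 'Vary' tells the cache which request headers select a variant;
        # getting this wrong can serve the wrong representation to clients.
        "Vary": ", ".join(vary),
        "Date": formatdate(usegmt=True),
    }

print(cacheable_headers(60)["Cache-Control"])  # public, max-age=60
```

The trade-off named in the text is visible here: a larger `max_age_seconds` takes more load off the origin, at the cost of a longer window of possible staleness.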
However, if and when you really depend on caching to save load there are some downsides.
If your systems are restarted for some reason and your caches are 'cold', all requests will be directed to your origin server (i.e. your code). So suddenly your code and the machines running it get the full force of what that cache has been saving you from. BANG! Most likely the machine will fall over.
So when you are at the extremes of loading and are using caching to really help, you must have operational procedures that ensure the initial load is throttled in some way when you restart systems. This then enables all your caches to warm up. The alternative is to keep your system offline from the full load that is going to be applied and simulate calls through via your own scripts; this warms all the caches.
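The 'simulate calls via your own scripts' approach above can be sketched as a throttled warm-up loop. This is illustrative Python, not EK9; `warm_cache` and its `fetch` parameter are hypothetical names (in practice `fetch` would perform a real HTTP GET through the caching tier):

```python
import time

def warm_cache(urls, fetch, requests_per_second=5):
    """Replay representative requests through the caching tier at a
    throttled rate, so caches fill before full production load arrives.
    'fetch' is whatever performs the HTTP GET (urllib, requests, etc.)."""
    interval = 1.0 / requests_per_second
    statuses = []
    for url in urls:
        statuses.append(fetch(url))
        time.sleep(interval)  # throttle: protect the cold origin server
    return statuses

# Example with a stand-in fetch function (a real one would do an HTTP GET):
print(warm_cache(["/a", "/b"], lambda url: 200, requests_per_second=100))
```

The throttle rate is the key operational knob: it must stay below what the cold origin can actually sustain.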
The main point I'm attempting to make here is that 'devops' does actually mean something. If you are developing code and are also operating that code, you need to move out of your comfort zone and embrace 'operations', 'infrastructure', 'networking' and appliances (like caching software).
The paragraphs above are why the web service examples and the HTTPResponse look quite verbose. This is by design: EK9 does not try to hide the HTTP protocol away; it draws it right out, so you can master it and provide sophisticated and performant solutions. The examples could have been written more like little 'Hello, World' examples. Then I could claim 'I can do a web service in EK9 in 10 lines of code'. But this rather misses the point of the examples, and of why/how web services add quite a lot of functionality, but also complexity. Just look at the number of variations of HTTP headers, response codes and the different 'verbs'.
Summary
The HTTP Server built into EK9 is not designed to be the most configurable nor the most flexible. It is designed to get out of the way. This should enable you as the developer to focus on the services you offer, the data formats needed and caching support.
If you want high performance, use caching (i.e. hardly call the origin server at all). Even with dynamic data and short cache lifetimes, this can be done with must-revalidate. Use multiple instances via Docker and Kubernetes for scaling when the code really does have to be called.
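How must-revalidate stays cheap is worth spelling out: the cache must check back with the origin, but a matching ETag lets the origin answer with a tiny 304 rather than resending the body. A minimal Python sketch of that conditional-GET exchange (illustrative only, not the EK9 HTTPResponse API; `respond` is a hypothetical helper):

```python
import hashlib

def respond(body, if_none_match=None):
    """Conditional GET: compare the client's If-None-Match header
    against the current ETag for this representation."""
    etag = '"%s"' % hashlib.sha256(body).hexdigest()[:16]
    headers = {"ETag": etag, "Cache-Control": "max-age=5, must-revalidate"}
    if if_none_match == etag:
        return 304, headers, b""   # cached copy is still valid: no body
    return 200, headers, body      # full response

status, hdrs, _ = respond(b"dynamic data")                    # first request
status2, _, body2 = respond(b"dynamic data", hdrs["ETag"])    # revalidation
print(status, status2, body2)  # 200 304 b''
```

This is the pattern the final CRUD example in this section relies on for its etag and pre-condition handling.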
You can also use Command Query Responsibility Segregation (CQRS). While this can add complexity, it does allow 'read' requests to scale through the use of 'replicas'. But it does require some trade-offs in terms of very fast data replication, or acceptance of short-term inconsistency of data from a client's point of view.
You cannot really do web services without focusing on caching, HTTP response codes and concurrent access to data. Web Services also fit into a wider infrastructure - with various components enabling/requiring certain behaviour from software components.
Next Steps
Some of the common patterns of use are covered in the next section on Common Design Patterns.
But if you are looking for more details on the command line parameters see the command line section.