Modern apps place high demands on front-end developers. Web apps require complex functionality, and the lion’s share of that work is falling to front-end devs:
building modern, accessible user interfaces
creating interactive elements and complex animations
managing complex application state
meta-programming: build scripts, transpilers, bundlers, linters, etc.
reading from REST, GraphQL, and other APIs
middle-tier programming: proxies, redirects, routing, middleware, auth, etc.
This list is daunting on its own, but it gets really rough if your tech stack doesn’t optimize for simplicity. A complex infrastructure brings hidden responsibilities that add risk, slowdowns, and frustration.
Depending on the infrastructure we choose, we may also inadvertently add server configuration, release management, and other DevOps duties to a front-end developer’s plate.
The sneaky middle tier — where front-end tasks can balloon in complexity
Let’s look at a task I’ve seen assigned to multiple front-end teams: create a simple REST API to combine data from a few services into a single request for the frontend. If you just yelled at your computer, “But that’s not a frontend task!” — I agree! But who am I to let facts hinder the backlog?
An API that’s only needed by the frontend falls into middle-tier programming. For example, if the front end combines the data from several backend services and derives a few additional fields, a common approach is to add a proxy API so the frontend isn’t making multiple API calls and doing a bunch of business logic on the client side.
There’s no clear answer as to which back-end team should own an API like this. Getting it onto another team’s backlog — and getting updates made in the future — can be a bureaucratic nightmare, so the front-end team ends up with the responsibility.
This is a story that ends differently depending on the architectural choices we make. Let’s look at two common approaches to handling this task:
Build an Express app on Node to create the REST API
Use serverless functions to create the REST API
Express + Node comes with a surprising amount of hidden complexity and overhead. Serverless lets front-end developers deploy and scale the API quickly so they can get back to their other front-end tasks.
Solution 1: Build and deploy the API using Node and Express (and Docker and Kubernetes)
Earlier in my career, the standard operating procedure was to use Node and Express to stand up a REST API. On the surface, this seems relatively straightforward. We can create the whole REST API in a file called server.js:
const express = require('express');
const PORT = 8080;
const HOST = '0.0.0.0';
const app = express();
app.use(express.static('site'));
// simple REST API to load movies by slug
const movies = require('./data.json');
app.get('/api/movies/:slug', (req, res) => {
  const { slug } = req.params;
  const movie = movies.find((m) => m.slug === slug);

  res.json(movie);
});

app.listen(PORT, HOST, () => {
  console.log(`app running on http://${HOST}:${PORT}`);
});
This code isn’t too far removed from front-end JavaScript. There’s a decent amount of boilerplate in here that will trip up a front-end dev if they’ve never seen it before, but it’s manageable.
If we run node server.js, we can visit http://localhost:8080/api/movies/some-movie and see a JSON object with details for the movie with the slug some-movie (assuming you’ve defined that in data.json).
Deployment introduces a ton of extra overhead
Building the API is only the beginning, however. We need to get this API deployed in a way that can handle a decent amount of traffic without falling down. Suddenly, things get a lot more complicated.
We need several more tools:
somewhere to deploy this (e.g. DigitalOcean, Google Cloud Platform, AWS)
a container to keep local dev and production consistent (i.e. Docker)
a way to make sure the deployment stays live and can handle traffic spikes (i.e. Kubernetes)
At this point, we’re way outside front-end territory. I’ve done this kind of work before, but my solution was to copy-paste from a tutorial or Stack Overflow answer.
The Docker config is somewhat comprehensible, but I have no idea if it’s secure or optimized:
FROM node:14
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
CMD [ "node", "server.js" ]
Next, we need to figure out how to deploy the Docker container into Kubernetes. Why? I’m not really sure, but that’s what the back end teams at the company use, so we should follow best practices.
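To give a sense of what that involves, even a bare-bones Kubernetes manifest for this one little service looks something like this (the image reference and replica count are placeholders, and you’d still need a Service and probably an Ingress in front of it):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: movies-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: movies-api
  template:
    metadata:
      labels:
        app: movies-api
    spec:
      containers:
        - name: movies-api
          # placeholder image reference; in practice this points at your registry
          image: registry.example.com/movies-api:latest
          ports:
            - containerPort: 8080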
Our initial task of “stand up a quick Node API” has ballooned into a suite of tasks that don’t line up with our core skill set. The first time I got handed a task like this, I lost several days getting things configured and waiting on feedback from the backend teams to make sure I wasn’t causing more problems than I was solving.
Some companies have a DevOps team to check this work and make sure it doesn’t do anything terrible. Others end up trusting the hivemind of Stack Overflow and hoping for the best.
With this approach, things start out manageable with some Node code, but quickly spiral out into multiple layers of config spanning areas of expertise that are well beyond what we should expect a frontend developer to know.
Solution 2: Build the same REST API using serverless functions
If we choose serverless functions, the story can be dramatically different. Serverless is a great companion to Jamstack web apps: it gives front-end developers a way to handle middle-tier programming without the unnecessary complexity of figuring out how to deploy and scale a server.
There are multiple frameworks and platforms that make deploying serverless functions painless. My preferred solution is to use Netlify since it enables automated continuous delivery of both the front end and serverless functions. For this example, we’ll use Netlify Functions to manage our serverless API.
Using Functions as a Service (a fancy way of describing platforms that handle the infrastructure and scaling for serverless functions) means that we can focus only on the business logic and know that our middle tier service can handle huge amounts of traffic without falling down. We don’t need to deal with Docker containers or Kubernetes or even the boilerplate of a Node server — it Just Works™ so we can ship a solution and move on to our next task.
First, we can define our REST API in a serverless function at netlify/functions/movie-by-slug.js:
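A minimal version might look something like this, reusing the same data.json from the Express example (error handling kept to the bare minimum):

const movies = require('./data.json');

exports.handler = async (event) => {
  // with the redirect below, event.path is the original URL, e.g. /api/movies/some-movie
  const slug = event.path.replace('/api/movies/', '');
  const movie = movies.find((m) => m.slug === slug);

  return {
    statusCode: movie ? 200 : 404,
    body: JSON.stringify(movie || { error: 'Not found' }),
  };
};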
To add the proper routing, we can create a netlify.toml at the root of the project:
[[redirects]]
from = "/api/movies/*"
to = "/.netlify/functions/movie-by-slug"
status = 200
This is significantly less configuration than we’d need for the Node/Express approach. What I prefer about this approach is that the config here is stripped down to only what we care about: the specific paths our API should handle. The rest — build commands, ports, and so on — is handled for us with good defaults.
If we have the Netlify CLI installed, we can run this locally right away with the command ntl dev, which knows to look for serverless functions in the netlify/functions directory.
Visiting http://localhost:8888/api/movies/booper will show a JSON object containing details about the “booper” movie.
So far, this doesn’t feel too different from the Node and Express setup. However, when we go to deploy, the difference is huge. Here’s what it takes to deploy this site to production:
Commit the serverless function and netlify.toml to the repo and push it up to GitHub, Bitbucket, or GitLab
Use the Netlify CLI to create a new site connected to your git repo: ntl init
That’s it! The API is now deployed and capable of scaling on demand to millions of hits. Changes will be automatically deployed whenever they’re pushed to the main repo branch.
Using serverless functions allows front-end developers to complete middle-tier programming tasks without taking on the additional boilerplate and DevOps overhead that creates risk and decreases productivity.
If our goal is to empower frontend teams to quickly and confidently ship software, choosing serverless functions bakes productivity into the infrastructure. Since adopting this approach as my default Jamstack starter, I’ve been able to ship faster than ever, whether I’m working alone, with other front-end devs, or cross-functionally with teams across a company.
I’ve been using Local for ages. Four years ago, I wrote about how I got all my WordPress sites running locally on it. I just wanted to give it another high five because it’s still here and still great. In fact, it’s even greater than it was back then.
Disclosure: Flywheel, the makers of Local, sponsor this site, but this post isn’t sponsored. I just wanted to talk about a tool I use. It’s not the only player in town. Even old school MAMP PRO has gotten a lot better, and many devs seem to like it. People who live on the command line tend to love Laravel Valet. There is another WordPress host getting in on the game here: DevKinsta.
The core of Local is still very much the same. It’s an app you run locally (Windows, Mac, or Linux) and it helps you spin up WordPress sites incredibly easily. Just a few choices and clicks and it’s going. This is particularly useful because WordPress has dependencies that make it run (PHP, MySQL, a web server, etc) and while you can absolutely do that by hand or with other tools, Local does it in a containerized way that doesn’t mess with your machine and can help you run locally with settings that are close to or entirely match your production site.
That stuff has always been true. Here are things that are new, compared to my post from four years ago!
Sites start up nearly instantaneously. About a year or so ago, Local had a beta build they dubbed Local “Lightning” because it was something of a rewrite that made it way faster. Now it’s just how Local works, and it’s fast as heck.
You can pull and push sites to production (and/or staging) very easily. Back then, I think you could pull, but not push. I still wire up my own deployment because I usually want it to be Git-based, but the pulling is awfully handy. Like, you sit down to work on a site, and first thing, you can just yank down a copy of production so you’re working with exactly what is live. That’s how I work anyway. I know that many people work other ways. You could have your local or staging environment be the source of truth and do a lot more pushing than pulling.
Instant reload. This is refreshing for my little WordPress sites where I didn’t even bother to spin up a build process or Sass or anything. Usually, those build processes also help with live reloading, so it’s tempting to reach for them just for that, but that’s no longer needed here. When I do need a build process, I’ll often wire up Gulp, but CodeKit also still works great and its server can proxy Local’s server just fine.
One-click admin login. This is actually the feature that inspired me to write this post. Such a tiny quality of life thing. There is a button that says Admin. You can click that and, rather than just taking you to the login screen, it auto-logs you in as a particular admin user. SO NICE.
There is a plugin system. My back-end friends got me on TablePlus, so I love that there is an extension that allows me to one-click open my WordPress DBs in TablePlus. There is also an image optimizer plugin, which scans the whole site for images it can make smaller. I just used that the other day because might as well.
That’s not comprehensive of course, it’s just a smattering of features that demonstrate how this product started good and keeps getting better.
Bonus: I think it’s classy how they shout out to the open source shoulders they stand on:
Typically, a single favicon is used across a whole domain. But there are times you wanna step it up with different favicons depending on context. A website might change the favicon to match the content being viewed. Or a site might allow users to personalize their theme colors, and those preferences are reflected in the favicon. Maybe you’ve seen favicons that attempt to alert the user of some event.
Multiple favicons can technically be managed by hand — Chris has shown us how he uses two different favicons for development and production. But when you reach the scale of dozens or hundreds of variations, it’s time to dynamically generate them.
This was the situation I encountered on a recent WordPress project for a directory of colleges and universities. (I previously wrote about querying nearby locations for the same project.) When viewing a school’s profile, we wanted the favicon to use a school’s colors rather than our default blue, to give it that extra touch.
With over 200 schools in the directory and climbing, we needed to go dynamic. Fortunately, we already had custom meta fields storing data on each school’s visual identity. This included school colors, which were being used in the stylesheet. We just needed a way to apply these custom meta colors to a dynamic favicon.
In this article, I’ll walk you through our approach and some things to watch out for. You can see the results in action by viewing different schools.
Each favicon is a different color in the tabs based on the school that is selected.
SVG is key
Thanks to improved browser support for SVG favicons, implementing dynamic favicons is much easier than in days past. In contrast to PNG (or the antiquated ICO format), SVG relies on markup to define vector shapes. This makes them lightweight, scalable, and best of all, receptive to all kinds of fun.
The first step is to create your favicon in SVG format. It doesn’t hurt to also run it through an SVG optimizer to get rid of the cruft afterwards. This is what we used in the school directory:
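Yours will look different, of course. For the snippets that follow, here’s a pared-down stand-in with the same idea: a couple of shapes whose fill we’ll eventually make dynamic.

<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 32 32">
  <circle cx="16" cy="16" r="16" fill="#1d2d44" />
  <path d="M16 6 4 13l12 7 12-7z" fill="#fff" />
</svg>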
Hooking into WordPress
Next, we want to add the favicon link markup in the HTML head. How to do this is totally up to you. In the case of WordPress, it could be added to the header template of a child theme or echoed through a wp_head() action.
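A sketch of that hook (the school_color meta key is a placeholder for whatever field stores your colors):

<?php
add_action( 'wp_head', function () {
  $color = '';

  // Only school profiles get a custom color.
  if ( is_singular( 'school' ) ) {
    $color = ltrim( get_post_meta( get_the_ID(), 'school_color', true ), '#' );
  }

  $href = $color ? '/favicon.php?color=' . urlencode( $color ) : '/favicon.php';

  echo '<link rel="icon" type="image/svg+xml" sizes="any" href="' . esc_url( $href ) . '">';
} );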
Here we’re checking that the post type is school, and grabbing the school’s color metadata we’ve previously stored using get_post_meta(). If we do have a color, we pass it into favicon.php through the query string.
From PHP to SVG
In a favicon.php file, we start by setting the content type to SVG. Next, we save the color value that’s been passed in, or use the default color if there isn’t one.
Then we echo the large, multiline chunk of SVG markup using PHP’s heredoc syntax (useful for templating). Variables such as $color are expanded when using this syntax.
Finally, we make a couple modifications to the SVG markup. First, classes are assigned to the color-changing elements. Second, a style element is added just inside the SVG element, declaring the appropriate CSS rules and echo-ing the $color.
Instead of a <style> element, we could alternatively replace the default color with $color wherever it appears if it’s not used in too many places.
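Putting those pieces together, a pared-down favicon.php might look something like this, using the stand-in icon from earlier (sanitization comes next):

<?php
// Tell the browser this is an SVG, not HTML.
header( 'Content-Type: image/svg+xml' );

$default_color = '#1d2d44'; // default blue (placeholder value)
$color         = isset( $_GET['color'] ) ? '#' . $_GET['color'] : $default_color;

echo <<<SVG
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 32 32">
  <style>.primary { fill: {$color}; }</style>
  <circle class="primary" cx="16" cy="16" r="16" />
  <path d="M16 6 4 13l12 7 12-7z" fill="#fff" />
</svg>
SVG;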
With that, you’ve got a dynamic favicon working on your site.
Security considerations
Of course, blindly echo-ing URL parameters opens you up to hacks. To mitigate these, we should sanitize all of our inputs.
In this case, we’re only interested in values that match the 3-digit or 6-digit hex color format. We can include a function like WordPress’s own sanitize_hex_color_no_hash() to ensure only colors are passed in.
function sanitize_hex_color( $color ) {
  if ( '' === $color ) {
    return '';
  }

  // 3 or 6 hex digits, or the empty string.
  if ( preg_match( '|^#([A-Fa-f0-9]{3}){1,2}$|', $color ) ) {
    return $color;
  }
}

function sanitize_hex_color_no_hash( $color ) {
  $color = ltrim( $color, '#' );

  if ( '' === $color ) {
    return '';
  }

  return sanitize_hex_color( '#' . $color ) ? $color : null;
}
You’ll want to add your own checks based on the values you want passed in.
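Back in favicon.php, that check might slot in like so:

// Fall back to the default if the input isn't a valid hex color.
$input = isset( $_GET['color'] ) ? sanitize_hex_color_no_hash( $_GET['color'] ) : '';
$color = $input ? '#' . $input : $default_color;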
Caching for better performance
Browsers cache SVGs, but this benefit is lost for PHP files by default. This means each time the favicon is loaded, your server’s being hit.
To reduce server strain and improve performance, it’s essential that you explicitly cache this file. You can configure your server, set up a page rule through your CDN, or add a cache control header to the very top of favicon.php like so:
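A week-long max-age is a reasonable starting point (tune it to how often your icon might change):

<?php
header( 'Cache-Control: public, max-age=604800' ); // one week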
In our tests, with no caching, our 1.5 KB SVG file took about 300 milliseconds to process on the first load, and about 100 milliseconds on subsequent loads. That’s pretty lousy. But with caching, we brought this down to 25 ms from CDN on first load, and 1 ms from browser cache on later loads — as good as a plain old SVG.
Browser support
If we were done there, this wouldn’t be web development. We still have to talk browser support.
One caveat is that Firefox requires the attribute type="image/svg+xml" in the favicon declaration for it to work. The other browsers are more forgiving, but it’s just good practice. You should include sizes="any" while you’re at it.
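All together, the declaration ends up looking something like this (the color value here is just an example):

<link rel="icon" type="image/svg+xml" sizes="any" href="/favicon.php?color=abc123">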
Safari doesn’t support SVG favicons as of yet, outside of the mask icon feature intended for pinned tabs. In my experimentation, this was buggy and inconsistent. It didn’t handle complex shapes or colors well, and cached the same icon across the whole domain. Ultimately we decided not to bother and just use a fallback with the default blue fill until Safari improves support.
Fallbacks
As solid as SVG favicon support is, it’s still not 100%. So be sure to add fallbacks. We can set an additional favicon for when SVG icons aren’t supported using rel="alternate icon".
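Something like this, pointing at whatever PNG fallback you have:

<link rel="alternate icon" type="image/png" href="/favicon.png">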
To make the site even more bulletproof, you can also drop the eternal favicon.ico in your root.
Going further
We took a relatively simple favicon and swapped one color for another. But taking this dynamic approach opens the door to so much more: modifying multiple colors, other properties like position, entirely different shapes, and animations.
For instance, here’s a demo I’ve dubbed Favicoin. It plots cryptocurrency prices in the favicon as a sparkline.
Implementing dynamic favicons like this isn’t limited to WordPress or PHP; whatever your stack, the same principles apply. Heck, you could even achieve this client-side with data URLs and JavaScript… not that I recommend it for production.
But one thing’s for sure: we’re bound to see creative applications of SVG favicons in the future. Have you seen or created your own dynamic favicons? Do you have any tips or tricks to share?
Hello and welcome back to Equity, TechCrunch’s venture capital-focused podcast, where we unpack the numbers behind the headlines.
This week had the whole crew aboard to record: Grace and Chris making us sound good, Danny to provide levity, Natasha to actually recall facts, and Alex to divert us from staying on topic. It’s teamwork, people – and our transitions are proof of it.
And it’s good that we had everyone around the virtual table as there was quite a lot to get through:
The team felt all kinds of ways about the Amazon-MGM deal. Some of us are more positive about it than the rest, but the gist of the transaction is that for Amazon, the purchase price is modest and the company is famously playing a supposedly long game. Let’s see how James Bond fits into it. Alex receives four points for not bringing up F1 thanks to the Bond-Aston Martin connection.
After launching last June with just $2 million, Collab Capital has closed its debut fund at its target goal: $50 million. The Black-led firm invests exclusively in Black-led startups, and got checks from Apple, PayPal, and Mailchimp to name a few. We talk about this feat, and note a few other Black-led venture capital firms making waves in the industry lately.
We Resolved our transition puns and eventually spoke about the Affirm spin-out, which raised $60 million in a funding round for BNPL for businesses. There are bigger questions there around the accessibility and point of BNPL, and whether it’s really reinventing the wheel or just repackaging it with simpler UX.
Next up, we got into a can of worms about the future of meetings thanks to Rewatch, which raised a $20 million Series A this week led by Andreessen Horowitz. The startup helps other startups create internal, private YouTubes to archive their meetings and any video-based comms. We could only spend a second on this, so if you want our longer thoughts in the form of text, check out our 3 views on the topic on Extra Crunch! (Discount Code: Equity)
From there we had Interactio and Fireflies.ai, two more startups that are tackling the complexities of meetings in the COVID-19 era, and whatever comes next. Both recently raised new funding, and Alex brought up Kudo to add one more upstart to the mix.
Noom, a weight loss platform, bulked up with $540 million in funding after nearly doubling its revenue from 2019 to 2020. The pandemic has made many people gain weight, but we chew into why Noom’s moment might be right now after a decade in the works.
Thanks for hanging out this week. Equity is back on Tuesday with our usual weekly kickoff, thanks to the American holiday on Monday. Chat then, unless you want to follow us on Twitter and get a first look at all of Chris’ meme work.
Equity drops every Monday at 7:00 a.m. PST, Wednesday, and Friday morning at 7:00 a.m. PST, so subscribe to us on Apple Podcasts, Overcast, Spotify and all the casts.
Amazon, in its ever-growing desire to become a super app in India, is testing a new category to persuade users to spend more time on the shopping service: Feature articles.
The American e-commerce giant has quietly launched “Featured Articles” on its shopping app and website in India that showcases feature articles, commentary and analysis on a wide range of topics including politics, governance, entertainment, sports, business, finance, health, fitness, books, and food. The articles are sourced from several large local media houses and magazines.
Some of these articles are “exclusively” available on Amazon, the company says on the website. To drive engagement, Amazon is also sending notifications to some Kindle users.
An Amazon spokesperson confirmed the new feature to TechCrunch, adding, “we remain focused on creating new and engaging experiences for our customers and as part of this endeavour, we have been testing a new service that brings articles on different topics like current affairs, books, business, entertainment, sports and lifestyle amongst others for readers.”
This isn’t the first time Amazon has explored integrating some reading material into its shopping service in India. In 2018, Amazon India started to feature some gadget reviews and listicles, sourced from local media houses.
On-demand grocery startups like Gorillas are invading Europe right now, but although on-demand-everything is kinda old-hat in the Bay Area, a new startup thinks it might just be able to do something new.
Food Rocket says it has raised a $2 million investment round from AltaIR Capital, Baring Vostok fund, and the AngelsDeck group of business angels, including Philipp Bashyan, of Russia’s Yonder, who has joined as an investor and advisor.
Yes, ok, admittedly this tiny startup is competing with DoorDash, GoPuff, Instacart and Amazon Fresh. Maybe let’s not get into that…
Using the company’s mobile app, users can order fresh groceries, ready-to-eat meals, and household goods that will be delivered within 10-15 minutes, says the startup, which will be servicing SoMa, South Park, Mission Bay, Japantown, Hayes Valley, and others. The company hopes to open 150 ‘dark stores’ on the West Coast as part of its infrastructure.
Vitaly Aleksandrov, CEO, and co-founder of Food Rocket said: “The level of competition in this market in the U.S. is still manageable, which is why we have the opportunity to become leaders in the sphere of fast delivery of basic products and household goods. We aim to replace brick-and-mortar supermarkets and to change consumers’ current habits in regards to grocery shopping.”
What can we say? Good luck?
It’s very popular to put a $ on lines that are intended to be a command in code documentation that involves the terminal (i.e. the command line).
Like this:
$ brew install somepackage
The point of that is that it mimics the prompt that you (may) see on your command line. Here’s mine:
So the dollar sign ($) is a little technique that people use to indicate this line of code is supposed to be run on the command line.
Minor trouble
The trouble with that is that I (and I’ll wager most other people too) will copy and paste commands like that from that documentation.
If I run that command above in my terminal exactly as it’s written…
…it doesn’t work. $ is not a command. How do you deal with this? You just have to know. You just need to have had this problem before and somehow learned that what the documentation is actually telling you is to run the command brew install somepackage (without the dollar sign) at the command line.
I say minor trouble as there’s all sorts of stuff like this in every job in the world. When I put something like font-size: 2.2rem in a blog post, I don’t also say, “Put that declaration in a ruleset in a CSS file that your HTML file links to.” You just have to know those things.
Fixing it with CSS
The fact that it’s only minor trouble and that tech is laden with things you just need to know doesn’t mean that we can’t try to fix this and do a little better.
The idea for this post came from this tweet that got way more likes than I thought it would:
If you really think it's more understandable to put a $ in docs for terminal commands, e.g.
$ brew install buttnugget
then a UX touch is:
code.command::before {
content: "$ ";
}
so that I don't copy it when I copy the command and have it fail in the actual terminal.
Now you can insert the $ as a pseudo-element rather than as actual text:
code.command::before {
content: "$ ";
}
Now you aren’t just saving yourself a character in the HTML: the $ can’t be selected, because that’s how pseudo-elements work. So you’re a bit better off in the UX department. Even if the user double-clicks the line or tries to select all of it, they won’t get the $ screwing up the copy-paste.
Hopefully they aren’t equally frustrated by not being able to copy the $. 😬
So, anyway, something like this incredible design by me:
Fixing it with text
A lot of documentation for code-things lives on a public git repo host like GitHub. You don’t have access to CSS to style what GitHub looks like, so while there is trickery available, you can’t just plop a line of CSS in there to style things.
We might have to (gasp) use our words:
<p>
Install the package by entering this
command at your terminal:
</p>
<kbd class="command">brew install package</kbd>
Other thoughts
You probably wouldn’t bother syntax highlighting it at all. I don’t think I’ve ever seen a terminal that syntax highlights commands as you enter them.
Eric Meyer suggested the <kbd> element which is the Keyboard Input element. I like that. I’ve long used <code> but I think <kbd> is more appropriate here.
Tim Chase suggested using a <span> and including the prompt in the HTML so you can style it uniquely if you want, including making it not selectable with user-select: none. There’s a sketch of that idea below.
Justin Searls has a dotfiles trick where if you accidentally copy/paste the $, it just ignores it and runs everything after it.
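Tim’s <span> idea, sketched out:

<kbd class="command"><span class="prompt" aria-hidden="true">$ </span>brew install somepackage</kbd>

.command .prompt {
  user-select: none;
}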
Europe’s lead data protection regulator has opened two investigations into EU institutions’ use of cloud services from U.S. cloud giants, Amazon and Microsoft, under so-called Cloud II contracts inked earlier between European bodies, institutions and agencies and AWS and Microsoft.
A separate investigation has also been opened into the European Commission’s use of Microsoft Office 365 to assess compliance with earlier recommendations, the European Data Protection Supervisor (EDPS) said today.
Wojciech Wiewiórowski is probing the EU’s use of U.S. cloud services as part of a wider compliance strategy announced last October following a landmark ruling by the Court of Justice (CJEU) — aka, Schrems II — which struck down the EU-US Privacy Shield data transfer agreement and cast doubt upon the viability of alternative data transfer mechanisms in cases where EU users’ personal data is flowing to third countries where it may be at risk from mass surveillance regimes.
In October, the EU’s chief privacy regulator asked the bloc’s institutions to report on their transfers of personal data to non-EU countries. This analysis confirmed that data is flowing to third countries, the EDPS said today. And that it’s flowing to the U.S. in particular — on account of EU bodies’ reliance on large cloud service providers (many of which are U.S.-based).
That’s hardly a surprise. But the next step could be very interesting as the EDPS wants to determine whether those historical contracts (which were signed before the Schrems II ruling) align with the CJEU judgement or not.
Indeed, the EDPS warned today that they may not — which could thus require EU bodies to find alternative cloud service providers in the future (most likely ones located within the EU, to avoid any legal uncertainty). So this investigation could be the start of a regulator-induced migration in the EU away from U.S. cloud giants.
Commenting in a statement, Wiewiórowski said: “Following the outcome of the reporting exercise by the EU institutions and bodies, we identified certain types of contracts that require particular attention and this is why we have decided to launch these two investigations. I am aware that the ‘Cloud II contracts’ were signed in early 2020 before the ‘Schrems II’ judgement and that both Amazon and Microsoft have announced new measures with the aim to align themselves with the judgement. Nevertheless, these announced measures may not be sufficient to ensure full compliance with EU data protection law and hence the need to investigate this properly.”
Amazon and Microsoft have been contacted with questions regarding any special measures they have applied to these Cloud II contracts with EU bodies.
The EDPS said it wants EU institutions to lead by example. And that looks important given how, despite a public warning from the European Data Protection Board (EDPB) last year — saying there would be no regulatory grace period for implementing the implications of the Schrems II judgement — there hasn’t been any major data transfer fireworks yet.
The most likely reason for that is a fair amount of head-in-the-sand reaction and/or superficial tweaks made to contracts in the hopes of meeting the legal bar (but which haven’t yet been tested by regulatory scrutiny).
Final guidance from the EDPB is also still pending, although the Board put out detailed advice last fall.
The CJEU ruling made it plain that EU law in this area cannot simply be ignored. So as the bloc’s data regulators start scrutinizing contracts that are taking data out of the EU, some of these arrangements are, inevitably, going to be found wanting — and their associated data flows ordered to stop.
To wit: A long-running complaint against Facebook’s EU-US data transfers — filed by the eponymous Max Schrems, a long-time EU privacy campaigner and lawyer, all the way back in 2013 — is slowly winding toward just such a possibility.
How Facebook might respond is anyone’s guess but Schrems suggested to TechCrunch last summer that the company will ultimately need to federate its service, storing EU users’ data inside the EU.
The Schrems II ruling does generally look like it will be good news for EU-based cloud service providers which can position themselves to solve the legal uncertainty issue (even if they aren’t as competitively priced and/or scalable as the dominant US-based cloud giants).
Fixing U.S. surveillance law, meanwhile — so that it gets independent oversight and accessible redress mechanisms for non-citizens in order to no longer be considered a threat to EU people’s data, as the CJEU judges have repeatedly found — is certainly likely to take a lot longer than ‘months’. If indeed the US authorities can ever be convinced of the need to reform their approach.
Still, if EU regulators finally start taking action on Schrems II — by ordering high profile EU-US data transfers to stop — that might help concentrate US policymakers’ minds toward surveillance reform. Otherwise local storage may be the new future normal.
Most images on the web are superfluous. If I might be a jerk for a bit, 99% of them aren’t even that helpful at all (although there are rare exceptions). That’s because images don’t often complement the text they’re supposed to support and instead hurt users, taking forever to load and blowing up data caps like some sort of performance tax.
Thankfully, this is mostly a design problem today because making images performant and more user-friendly is so much easier than it once was. We have better image formats like WebP (and soon, perhaps, JPEG XL). We have the magic of responsive images of course. And there are tons of great tools out there, like ImageOptim, as well as resources such as Addy Osmani’s new book.
Although perhaps my favorite way to improve image performance is with lazy loading:
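With the native loading attribute, that’s a one-attribute change (the file name and dimensions here are just placeholders):

<img src="image.jpg" alt="Photograph of hot air balloons by Musab Al Rawahi." width="800" height="600" loading="lazy">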
This image will only load when a user scrolls down the page so it’s visible to the user — which removes it from the initial page load and that’s just great! Making that initial load of a webpage lightning fast is a big deal.
But maybe there are images that should never load at all. Perhaps there are situations where it’d be better if a person could opt-into seeing it. Here’s one example: take the text-only version of NPR and click around for a bit. Isn’t it… just so great?! It’s readable! There’s no junk all over the place, it respects me as a user and — sweet heavens — is it fast.
Did I just show you an image in a blog post that insults the very concept of images? Yep! Sue me.
So! What if we could show images on a website but only once they are clicked or tapped? Wouldn’t it be neat if we could show a placeholder and swap it out for the real image on click? Something like this:
Well, I had two thoughts here as to how to build this chap (the golden rule is that there’s never one way to build anything on the web).
Method #1: Using <img> without a src attribute
We can remove the src attribute of an <img> tag to hide an image. We could then put the image file in an attribute, like data-src or something, just like this:
<img data-src="image.jpg" src="" alt="Photograph of hot air balloons by Musab Al Rawahi. 144kb">
By default, most browsers will show a broken image icon that you’re probably familiar with:
Okay, so it’s sort of accessible. I guess? You can see the alt tag rendered on screen automatically, but with a light dash of JavaScript, we can then swap out the src with that attribute:
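Something along these lines, swapping things over on click:

document.querySelectorAll('img[data-src]').forEach((img) => {
  img.addEventListener('click', () => {
    // copy the real file path into src so the browser fetches it
    img.src = img.dataset.src;
  });
});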
Ugh. In some browsers there’ll be a tiny broken image icon in the bottom when the image hasn’t loaded. The problem here is that browsers don’t give you a way to remove the broken image icon with CSS (and we probably shouldn’t be allowed to anyway). It’s a bit annoying to style the alt text, too. But if we remove the alt attribute altogether, then the broken image icon is gone, although this makes the <img> unusable without JavaScript. So removing that alt text is a no-go.
As I said: Ugh. I don’t think there’s a way to make this method work (although please prove me wrong!).
Method #2: Using links to create an image
The other option we have is to start with the humble hyperlink, like this:
<a href="image.jpg">Photograph of hot air balloons by Musab Al Rawahi. 144kb<a>
Which, yep, nothing smart happening yet — this will just render a link to an image:
That works accessibility-wise, right? If we don’t have any JavaScript, then all we have is just a link that folks can optionally click. Performance-wise, it can’t get much faster than plain text!
But from here, we can reach for JavaScript to stop the link from loading on click, grab the href attribute within that link, create an image element and, finally, throw that image on the page and remove the old link once we’re done:
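Assuming we tag these placeholder links with a class like .load-image (the class name is just for the example), the JavaScript is only a handful of lines:

document.querySelectorAll('a.load-image').forEach((link) => {
  link.addEventListener('click', (event) => {
    event.preventDefault(); // stop the browser from navigating to the image file

    const img = document.createElement('img');
    img.src = link.href;        // the href becomes the image source
    img.alt = link.textContent; // reuse the link text as alt text

    link.replaceWith(img);      // swap the placeholder link for the real image
  });
});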
We could probably style this placeholder link to make it look a bit nicer than what I have below. But this is just an example. Go ahead and click the link to load the image again:
And there you have it! It isn’t groundbreaking or anything, and I’m sure someone’s done this before at some point or another. But if we wanted to be extremely radical about performance beyond the first paint and initial load, then I think this is an okay-ish solution. If we’re making a text-only website then I think this is definitely the way to go.
Perhaps we could also make a web component out of this, or even detect if someone has prefers-reduced-data and then only load images if someone has enough data or something. What do you think?
Serverless architectures have brought engineering teams a great number of benefits. We get simpler deployments, automatic and infinite scale, better concurrency, and a stateless API surface. It’s hard to imagine going back to the world of managing servers, broken local environments, and SSHing into boxes. When I started doing web development, moving from servers in a closet to Rackspace was a revolution.
It’s not just hosting and how we deploy applications that have changed under this new paradigm. These advantages of serverless have presented challenges to the traditional MVC architecture that has been so ubiquitous. Thinking back to those early days, frameworks like Zend, Laravel, Django, and Rails were incredible productivity boosters. They not only influenced the way we built applications, but also the way we think about solving problems with the web. These were your “majestic monoliths” and they solidified the MVC pattern as the de facto standard for most of the web applications we use today.
In many ways, the rise of microservices and with it the idea of hexagonal architectures (aka ports and adaptors) led naturally to this new serverless world. It started as creating and hosting standalone APIs organized by a shared context that were still backed by the classic frameworks we already knew and loved.
The popularity of Node.js led to the Express framework, where we now had a less rigid microframework enabling us to organize our code more flexibly. The final metamorphosis of this pattern is individual pieces of business logic that can be executed on demand in the cloud. We no longer manage machines or even multiple API servers. Each piece of business logic only exists when it is needed, for as long as it’s needed and no longer. They are lightweight, and you only pay for the individual parts of your application that get used.
Today, we hardly realize there is a server at all; even the terminology, serverless, is a misnomer designed to highlight just how far we’ve come from the days of setting up XAMPP or VirtualBox and Vagrant. The benefits are clear: the hours saved, the headaches avoided, and the freedom to just solve business problems with code bring building software closer than ever to the simple act of writing prose.
The Problem
The classic MVC frameworks codified not only the pattern of working in three distinct tiers (data, application, and presentation) but also the technology for each. You were able to choose some options at each layer, for instance Postgres or MySQL as the data layer, but the general idea is these decisions are made for you. You implicitly adopt the idea of convention over configuration.
Postgres as a data layer solution makes a lot of sense. It’s robust, fast, supports ACID transactions, and has over thirty years of development behind it. It is also open-source, can be hosted almost anywhere, and is likely to be around for another thirty years. You could do much worse than stake your company’s future on Postgres. Add to that all the work put into integrating it into these equally battle-tested frameworks and the story for choosing Postgres becomes very strong.
However, when we enter a serverless context, this type of architecture presents a number of challenges particularly when it comes to handling our data.
Common issues include:
Maintaining stateful connections: when each user is a new connection to Postgres, this can quickly max out the number of connections Postgres can handle.
Provisioned scale: with Postgres we must be sure to provision the right size database for our application ahead of time, which is not ideal when our application layer can automatically scale to any workload.
Traditional security model: this model does not allow for any client-side use and is vulnerable to SQL injection attacks.
Data centralization: while our application may be deployed globally, this is of little use when our database is stuck in a single location potentially thousands of miles from where the data needs to be.
High operational overhead: serverless promises to free us from complexity and remove barriers to solving business problems. With Postgres we return to needing to manage a service ourselves, dealing with sharding, scaling, distribution, and backups.
Traditional systems like Postgres were never designed for this purpose. To start, Postgres operates on the assumption of a stateful connection. What this means is that Postgres will hold open a connection with a server in order to optimize the response time. In a traditional monolithic application, if your server had to open a new connection every single time it requested data, this would be quite inefficient. The actual network request would often be the primary bottleneck. By keeping this connection cached, Postgres removes this bottleneck. As you scale your application you will likely have multiple machines running, and a single Postgres database can handle many such connections, but this number isn’t infinite. In fact, in many cases, you have to set this number at the time of provisioning the database.
In a serverless context, each request is effectively a new machine and a new connection to the database. As Postgres attempts to hold open these connections we can quickly run up against our connection limit and the memory limits of the machine. This also introduces another issue with the traditional Postgres use case, which is provisioned resources.
With Postgres we have to decide the size of the database, the capacity of the machine it runs on, where that machine is located, and the connection limit at the time of creation. This puts us in a situation where our application can scale automatically but we must watch our database closely and scale it ourselves. This can be even trickier when we are dealing with spikes in traffic that are not consistent in both time and location. Ultimately by moving to serverless we have reduced the operational overhead of our application layer, but created some increased operational overhead in our database. Would it not be better if both our application and our data layer could scale together without us having to manage it?
The complexity required to make traditional systems like Postgres work in a serverless environment can often be enough to abandon the architecture all together. Serverless requires on-demand, stateless execution of business logic. This allows us to create lighter, more scalable programs but does not allow us to preserve things like network connections and can be slowed down by additional dependencies like ORMs and middleware.
The Ideal Solution
It’s time we begin thinking about a new type of database, one more in line with the spirit of serverless and one that embraces iterative development and more unified tooling. We want this database to have the same automatic, on-demand scale as the rest of our application, as well as the global distribution that is a hallmark of the serverless promise. This ideal solution should:
Support stateless connections with no limits.
Auto-scale, both in the size of the machine and in the size of the database itself.
Be securely accessible from both the client and the server, to support serverless APIs as well as Jamstack use cases.
Be globally distributed, so data is always closest to where it is needed.
Be free of operational overhead, so we don’t add complexity managing things like sharding, distribution, and backups.
If we are truly to embrace the serverless architecture, we need to ensure that our database scales along with the rest of the application. In this case, we have a variety of solutions, some of which involve sticking with Postgres. Amazon Aurora is one example of a Postgres cloud solution that gives us automatic scalability, backups, and some global distribution of data. However, Amazon Aurora is hardly easy to set up and doesn’t free us from all operational overhead. We are also not able to securely access our data from the client without building an accompanying API, as it still follows the traditional Postgres security model.
Another option here is a service like Hasura, which allows us to leverage Postgres but access our data by way of a GraphQL API. This solves our security issues when accessing data from the client and gets us much closer to the ease of use we have with many of our other serverless services. However, we are left to manage our database ourselves, and this merely adds another layer on top of our database to manage the security. While the Hasura application layer is distributed, our database is not, so we don’t get true global distribution with this system.
I think at this point we should turn toward some additional solutions that really hit all the points above. When looking at solutions outside of Postgres we have to add two additional requirements that put the solutions on par with the power of Postgres:
Support for robust, distributed, ACID transactions.
Support for relational modeling such that we can easily perform join operations on normalized data.
When we typically step outside of relational database systems into the world of schemaless solutions, ACID transactions and relational, normalized data are often things we sacrifice. So we want to make sure that when we optimize for serverless we are not losing the features that have made Postgres such a strong contender for so long.
Azure’s Cosmos DB supports a variety of databases (both SQL and NoSQL) under the hood. Cosmos DB also provides us with libraries that can work on both the client and server, freeing us from an additional dependency like Hasura. We get some global distribution as well as automatic scale. However, we are still left with a lot of choices to make and are not entirely free from database management. We still have to manage our database size effectively and choose from many database options that all have their pros and cons.
What we really want is a fully managed solution where the operational overhead of choosing database size and the type of database can be abstracted away. In a general sense having to research many types of databases and estimate scale would be things that matter a lot less if we have all of the features we need. Fauna is a solution where we don’t have to worry about the size of the database nor do we have to select the type of database under the hood. We get the support of ACID transactions, global distribution, and no data loss without having to figure out the best underlying technology to achieve that. We also can freely access our database on the client or the server with full support for serverless functions. This allows us to flexibly create different types of applications in a variety of architectures such as JAMstack clients, serverless APIs, traditional long-running backends, or combinations of these styles.
When it comes to schemaless databases, we gain flexibility but are forced to think differently about our data model to most efficiently query our data. When moving away from Postgres this is often a subtle but large point of friction. With Fauna, we have to move to a schemaless design, as we cannot opt into another database type. However, Fauna makes use of a unique document-relational database model. This allows us to utilize relational database knowledge and principles when modeling our data into collections and indexes. This is why I think it’s worth considering for people used to Postgres, as the mental overhead is not the same as with other NoSQL options.
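As a quick sketch with Fauna’s JavaScript driver, reading a document through an index feels close to a relational lookup (the collection and index names here are made up):

const faunadb = require('faunadb');
const q = faunadb.query;

const client = new faunadb.Client({ secret: process.env.FAUNA_SECRET });

// Look up a single document by a unique field via an index,
// much like querying an indexed column in Postgres.
client
  .query(q.Get(q.Match(q.Index('users_by_email'), 'alice@example.com')))
  .then((result) => console.log(result.data))
  .catch((err) => console.error(err));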
Conclusion
Systems like Postgres have been powerful allies for building applications for over thirty years. The rise of agile and iterative development cycles led us to the serverless revolution before us today. We are able to build increasingly complex applications with less operational overhead than ever before. That power requires us to think about databases differently and demand a similar level of flexibility and ease of management. We want to preserve the best qualities of Postgres, like ACID transactions, but ditch the more unsavory aspects of working with the database like connection pooling, provisioning resources, security, distribution and managing scale, availability and reliability.
Solutions such as Amazon’s Aurora Serverless v2 create a serverless solution that works in this new world. There are also solutions like Hasura that sit on top of this to further fulfill the promise of serverless. We also have solutions like Cosmos DB and Fauna that are not based in Postgres but are built for serverless while supporting important Postgres functionality.
While Cosmos DB gives us a lot of flexibility in terms of what database we use, it still leaves us with many decisions and is not completely free of operational overhead. Fauna has made it so you don’t have to compromise on ACID transactions, relational modeling or normalized data — while still alleviating all the operational overhead of database management. Fauna is a complete rethinking of a database that is truly free of operational overhead. By combining the best of the past with the needs of the future Fauna has built a solution that behaves more like a data API and feels natural in a serverless future.
In his last An Event Apart talk, Dave made a point that it’s really only just about right now that Web Components are becoming a practical choice for production web development. For example, it has only been about a year since Edge went Chromium. Before that, Edge didn’t support any Web Component stuff. If you were shipping them long ago, you were doing so with fairly big polyfills, or in a progressive-enhancement style, where if they failed, they did so gracefully or in a controlled environment, say, an intranet where everyone has the same computer (or in something like Electron).
In my opinion, Web Components still have a ways to go to be compelling to most developers, but they are getting there. One thing that I think will push their adoption along is the incredibly easy DX of pre-built components thanks to, in part, ES Modules and how easy it is to import JavaScript.
I’ve mentioned this one before: look how silly-easy it is to use Nolan Lawson’s emoji picker:
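Roughly, the whole integration is this (loading the package through a bundler or an import map; a module script pointed at a CDN copy of emoji-picker-element works too):

<!-- one line of HTML for the element itself -->
<emoji-picker></emoji-picker>

<script type="module">
  // one line of JavaScript to register the custom element
  import 'emoji-picker-element';

  // one more line to wire it up: the element fires an emoji-click event
  // with a JSON-friendly object describing the selection
  document.querySelector('emoji-picker')
    .addEventListener('emoji-click', (event) => console.log(event.detail));
</script>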
That’s one line of JavaScript and one line of HTML to get it working, and another one line of JavaScript to wire it up and return a JSON response of a selection.
Compelling, indeed. DX, you might call it.
Web Components like that aren’t alone, hence the title of this post. Dave put together a list of Awesome Standalones. That is, Web Components that aren’t a part of some bigger, more complex system [1], but are just little drop-in doodads that are useful on their own, just like the emoji picker. Dave’s repo lists about 20 of them.
Pretty sweet for something that comes across the wire at ~3KB. The production story is whatever you want it to be. Use it off the CDN. Bundle it up with your stuff. Self-host it or leave it a one-off. Whatever. It’s darn easy to use. In the case of this standalone, there isn’t even any Shadow DOM to deal with.
No shade on Shadow DOM, that’s perhaps the most useful feature of Web Components (and cannot be replicated by a library since it’s a native browser feature), but the options for styling it aren’t my favorite. And if you used three different standalone components with three different opinions on how to style through the Shadow DOM, that’s going to get annoying.
What I picture is developers dipping their toes into stuff like this, seeing the benefits, and using more and more of them in what they are building, and even building their own. Building a design system from Web Components seems like a real sweet spot to me, like many big names [2] already do.
The dream is for people to actually consolidate common UI patterns. Like, even if we never get native HTML “tabs” it’s possible that a Web Component could provide them, get the UI, UX, and accessibility perfect, yet leave them style-able such that literally any website could use them. But first, that needs to exist.
[1] That’s a cool way to use Web Components, too, but easy gets attention, and that matters.
[2] People always mention Lightning Design System as a Web Components-based design system, but I’m not seeing it. For example, this accordion looks like semantic HTML with class names, not Web Components. What am I missing?