> All in One 586: January 2020


Friday, January 31, 2020

Partly Cloudy today!



With a high of F and a low of 26F. Currently, it's 32F and Clear outside.

Current wind speeds: 10 from the Northwest

Pollen: 0

Sunrise: January 31, 2020 at 07:59PM

Sunset: February 1, 2020 at 06:10AM

UV index: 0

Humidity: 52%

via https://ift.tt/2livfew

February 1, 2020 at 10:00AM

Customer feedback is a development opportunity

Online commerce accounted for nearly $518 billion in revenue in the United States alone last year. Online marketplaces like Amazon and eBay are expected to command 40% of the global retail market in 2020. As the number of digital offerings — not only marketplaces but also online storefronts and company websites — available to consumers continues to grow, the primary challenge for any online platform lies in setting itself apart.

The central question, then: Where does differentiation matter most?

A customer’s ability to easily (and accurately) find a specific product or service with minimal barriers helps ensure they feel satisfied and confident with their choice of purchase. This ultimately becomes the differentiator that sets an online platform apart. It’s about coupling a stellar product with an exceptional experience. Often, that takes the form of simple, searchable access to a wide variety of products and services. Sometimes, it’s about surfacing a brand that meets an individual consumer’s needs or price point. In both cases, platforms are in a position to help customers avoid having to chase down a product or service through multiple clicks while offering a better way of comparing apples to apples.

To be successful, a company should adopt a consumer-first philosophy that informs its product ideation and development process. Successful consumer-first development rests on a company’s ability to expediently deliver fresh features that customers actually respond to, rather than prioritizing whichever update seems most profitable. The best way to inform both elements is to consistently collect and learn from customer feedback in a timely way — and sometimes, this will mean making decisions that benefit consumers over what is in the best interest of the company.



from Amazon – TechCrunch https://ift.tt/36OIB3I
via IFTTT

Parker Solar Probe

It will get within 9 or 10 Sun-diameters of the "bottom" (the Sun's surface) which seems pretty far when you put it that way, but from up here on Earth it's practically all the way down.

from xkcd.com https://xkcd.com/2262/
via IFTTT

Innovation Can’t Keep the Web Fast

Every so often, innovation bears fruit in the form of improvements to the foundational layers of the web. In 2015, HTTP/2 became a published standard in an effort to update an aging protocol. The change was both necessary and overdue, as HTTP/1 had turned web performance into an arcane discipline built on strange workarounds for the protocol’s limitations. Though HTTP/2 proliferation isn’t absolute — and there are kinks yet to be worked out — I don’t think it’s a stretch to say the web is better off because of it.

Unfortunately, the rollout of HTTP/2 has presided over a 102% median increase in bytes transferred over mobile in the last four years. If we look at the 90th percentile of that same dataset — because it’s really the long tail of performance we need to optimize for — we see an increase of 239%. From 2016 (PDF warning) to 2019, the average mobile download speed in the U.S. increased by 73%. In Brazil and India, average mobile download speeds increased by 75% and 28%, respectively, over the same period.

While page weight alone doesn’t necessarily tell the whole story of the user experience, it is, at the very least, a loosely related phenomenon that threatens the collective user experience. The story that HTTPArchive tells through data acquired from the Chrome User Experience Report (CrUX) can be interpreted a number of different ways, but this one fact is steadfast and unrelenting: most metrics gleaned from CrUX over the last couple of years show little, if any, improvement despite various improvements in browsers, the HTTP protocol, and the network itself.

Given these trends, all that can be said of the impact of these improvements at this point is that they have helped to stem the tide of our excesses, but have done precious little to reduce them. Despite every significant improvement to the underpinnings of the web and the networks we access it through, we continue to build for it in ways that suggest we’re content with the never-ending Jevons paradox in which we toil.

If we’re to make progress in making a faster web for everyone, we must recognize some of the impediments to that goal:

  1. The relentless desire to monetize every square inch of the web, as well as the army of third-party vendors that fuel the research mandated by such fevered efforts.
  2. Workplace cultures that favor unrestrained feature-driven development. This practice adds to — but rarely takes away from — what we cram down the wire to users.
  3. Developer conveniences that make the job of the developer easier, but can place an increasing cost on the client.

Counter-intuitively, owners of mature codebases which embody some or all of these traits continue to take the same unsustainable path to profitability they always have. They do this at their own peril, rather than acknowledge the repeatedly established fact that performance-first development practices will do as much — or more — for their bottom line and the user experience.

It’s with this understanding that I’ve come to accept that our current approach to remedy poor performance largely consists of engineering techniques that stem from the ill effects of our business, product management, and engineering practices. We’re good at applying tourniquets, but not so good at sewing up deep wounds.

It’s becoming increasingly clear that web performance isn’t solely an engineering problem, but a problem of people. This is an unappealing assessment in part because technical solutions are comparatively inarguable. Content compression works. Minification works. Tree shaking works. Code splitting works. They’re undeniably effective solutions to what may seem like entirely technical problems.

The intersection of web performance and people, on the other hand, is messy and inconvenient. Unlike a technical solution as clearly beneficial as HTTP/2, how do we qualify what successful performance cultures look like? How do we qualify successful approaches to get there? I don’t know exactly what that looks like, but I believe a good template is the following marriage of cultural and engineering tenets:

  1. An organization can’t be successful in prioritizing performance if it can’t secure the support of its leaders. Without that crucial element, it becomes extremely difficult for organizations to create a culture in which performance is the primary feature of their product.
  2. Even with leadership support, performance can’t be effectively prioritized if the telemetry isn’t in place to measure it. Without measurement, it becomes impossible to explain how product development affects performance. If you don’t have the numbers, no one will care about performance until it becomes an apparent crisis. (See the telemetry sketch after this list.)
  3. When you have the support of leadership to make performance a priority and the telemetry in place to measure it, you still can’t get there unless your entire organization understands web performance. This is the time at which you develop and roll out training, documentation, best practices, and standards the organization can embrace. In some ways, this is the space which organizations have already spent a lot of time in, but the challenging work is in establishing feedback loops to assess how well they understand and have applied that knowledge.
  4. When all of the other pieces are finally in place, you can start to create accountability in the organization around performance. Accountability doesn’t come in the form of reprisals when your telemetry tells you performance has suffered over time, but rather in the form of guard rails put in place in the deployment process to alert you when thresholds have been crossed.
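
On the telemetry point in the second item, the collection piece doesn’t have to be elaborate. Here’s a minimal sketch using the browser’s PerformanceObserver API — the /analytics endpoint and payload shape are stand-ins for whatever collector you actually use:

const po = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // Beacon each paint metric (FCP, LCP) to a collection endpoint.
    // The URL is hypothetical — point it at your own analytics.
    navigator.sendBeacon('/analytics', JSON.stringify({
      metric: entry.name || entry.entryType,
      value: Math.round(entry.startTime)
    }));
  }
});

// `buffered: true` replays entries that fired before the observer existed.
po.observe({ type: 'paint', buffered: true });
po.observe({ type: 'largest-contentful-paint', buffered: true });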

Now comes the kicker: even if all of these things come together in your workplace, good outcomes aren’t guaranteed. Barring some regulation that forces us to address the poorly performing websites in our charge — akin to how the ADA keeps us on our toes with regard to accessibility — it’s going to take continuing evangelism and pressure to ensure performance remains a priority. Like so much of the work we do on the web, the work of maintaining a good user experience in evolving codebases is never done. I hope 2020 is the year that we meaningfully recognize that performance is about people, and adapt accordingly.

As technological innovations such as HTTP/3 and 5G emerge, we must take care not to rest on our laurels and simply assume they will heal our ills once and for all. If we do, we’ll certainly be having this discussion again when the successors to those technologies loom. Innovation alone can’t keep the web fast because making the web fast — and keeping it that way — is the hard work we can only accomplish by working together.

The post Innovation Can’t Keep the Web Fast appeared first on CSS-Tricks.



from CSS-Tricks https://ift.tt/31bVlAl
via IFTTT

Smaller HTML Payloads with Service Workers

Short story: Philip Walton has a clever idea for using service workers to cache the top and bottom of HTML files, reducing a lot of network weight.

Longer thoughts: When you're building a really simple website, you can get away with literally writing raw HTML. It doesn't take long to need a bit more abstraction than that. Even if you're building a three-page site, that's three HTML files, and your programmer's mind will be looking for ways to not repeat yourself. You'll probably find a way to "include" all the stuff at the top and bottom of the HTML, and just change the content in the middle.

I have tended to reach for PHP for that sort of thing in the past (<?php include('header.php'); ?>), although these days I'm feeling much more jamstacky and I'd probably do it with Eleventy and Nunjucks.

Or, you could go down the SPA (Single Page App) route just for this basic abstraction if you want. Next and Nuxt are perhaps a little heavy-handed for a few includes, but hey, at least they are easy to work with and the result is a nice static site. The thing about these JavaScript-powered SPA frameworks (Gatsby is in here, too), is that they "hydrate" from static sites into SPAs as the JavaScript loads. Part of the reason for that is speed. No longer does the browser need to reload and request a whole big HTML page again to render; it just asks for whatever smaller amount of data it needs and replaces it on the fly.

So in a sense, you might build a SPA because you have a common header and footer and just want to replace the guts, for efficiency's sake.

Here's Phil:

In a traditional client-server setup, the server always needs to send a full HTML page to the client for every request (otherwise the response would be invalid). But when you think about it, that’s pretty wasteful. Most sites on the internet have a lot of repetition in their HTML payloads because their pages share a lot of common elements (e.g. the <head>, navigation bars, banners, sidebars, footers etc.). But in an ideal world, you wouldn’t have to send so much of the same HTML, over and over again, with every single page request.

With service workers, there’s a solution to this problem. A service worker can request just the bare minimum of data it needs from the server (e.g. an HTML content partial, a Markdown file, JSON data, etc.), and then it can programmatically transform that data into a full HTML document.

So rather than PHP, Eleventy, a JavaScript framework, or any other solution, Phil's idea is that a service worker (a native browser technology) can save a cache of a site's header and footer. Then server requests only need to be made for the "guts" while the full HTML document can be created on the fly.
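
To make that concrete, here's a minimal sketch of the idea. To be clear, this is not Phil's actual implementation — the shell file names and the ?partial convention are assumptions:

// sw.js
const SHELL_CACHE = 'shell-v1';
const HEADER_URL = '/shell/header.html';
const FOOTER_URL = '/shell/footer.html';

// Cache the header and footer partials up front.
self.addEventListener('install', (event) => {
  event.waitUntil(
    caches.open(SHELL_CACHE).then((cache) => cache.addAll([HEADER_URL, FOOTER_URL]))
  );
});

// For page navigations, stitch cached shell + fresh content into a full document.
self.addEventListener('fetch', (event) => {
  if (event.request.mode !== 'navigate') return;
  const url = new URL(event.request.url);

  event.respondWith((async () => {
    const cache = await caches.open(SHELL_CACHE);
    const [header, footer] = await Promise.all([
      cache.match(HEADER_URL),
      cache.match(FOOTER_URL)
    ]);
    if (!header || !footer) return fetch(event.request); // fall back to the network

    // Ask the server for just the "guts" — assumes it can return a
    // content-only fragment when given a ?partial query param.
    const content = await fetch(url.pathname + '?partial=true');
    const html = (await header.text()) + (await content.text()) + (await footer.text());
    return new Response(html, { headers: { 'Content-Type': 'text/html' } });
  })());
});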

It's a super fancy idea, and no joke to implement, but the fact that it could be done with less tooling might be appealing to some. On Phil's site:

on this site over the past 30 days, page loads from a service worker had a 47.6% smaller network payload, and a median First Contentful Paint (FCP) that was 52.3% faster than page loads without a service worker (416ms vs. 851ms).

Aside from configuring a service worker, I'd think the most finicky part is having to configure your server/API to deliver a content-only version of your stuff or build two flat file versions of everything.


The post Smaller HTML Payloads with Service Workers appeared first on CSS-Tricks.



from CSS-Tricks https://ift.tt/36wgZ3Z
via IFTTT

Lightning-Fast Web Performance

If you're interested in leveling up your knowledge and skill of web performance, you can't do better than learning directly from Scott Jehl.


The post Lightning-Fast Web Performance appeared first on CSS-Tricks.



from CSS-Tricks https://ift.tt/36hgSIF
via IFTTT

After earnings, Amazon joins the $1T club as Alphabet dips out

American tech companies almost did something neat today before messing it up.

After reporting earnings yesterday, Amazon’s shares shot higher this morning, pushing the company’s value north of $1 trillion. Its growth and profits proved toothsome to the investing classes, bolstering the Seattle area’s tech pedigree by adding a second trillion-dollar business to its rolls.

Microsoft and Apple, also flush after reporting their own well-received earnings, are also worth north of $1 trillion apiece. Amazon’s ascension would have brought the group of trillion-dollar American tech shops to four, if Alphabet hadn’t gone and spoiled the fun.

Here’s the chart, on which you can spot Alphabet’s dip back under the $1,000 billion mark:

MSFT Market Cap Chart

So close, right?

Perhaps Google and its cadre of money-losing subsidiaries will manage to skate back over $1 trillion today, leaving only little Facebook out of the Cool Kid Clubhouse.

Get it together, Zuck! A billion dollars isn’t cool. You know what is? Being yet another trillion-dollar tech company. Gosh.



from Amazon – TechCrunch https://ift.tt/3b2ox0W
via IFTTT

Even as Microsoft Azure revenue grows, AWS’s market share lead stays strong

When analyzing the cloud market, there are many ways to look at the numbers: revenue, year-over-year or quarter-over-quarter growth — or lack of it — or market share. Each of these numbers tells a story, but in a cloud market where aggregate growth remains high, Azure, even as its healthy expansion continues, is still struggling to gain meaningful ground on AWS’s lead.

This has to be frustrating to Microsoft CEO Satya Nadella, who has managed to take his company from cloud wannabe to a strong second place in the IaaS/PaaS market, yet still finds his company miles behind the cloud leader. He’s done everything right to get his company to this point, but sometimes the math just isn’t in your favor.

Numbers don’t lie

John Dinsdale, chief analyst at Synergy Research, says Microsoft’s growth rate is higher overall than Amazon’s, but AWS still has a big lead in market share. “In absolute dollar terms, it usually has larger increments in revenue numbers and that makes Amazon hard to catch,” he says, adding “what I can say is that this is a very tough gap to close and mathematically it could not happen any time soon, whatever the quarterly performance of Microsoft and AWS.”

The thing to remember with the cloud market is that it’s not even close to being a fixed pie. In fact, it’s growing rapidly and there’s still plenty of market share left to win. As of today, before Amazon has reported, it has a substantial lead, no matter how you choose to measure it.



from Amazon – TechCrunch https://ift.tt/2RLkPBo
via IFTTT

Thursday, January 30, 2020

Partly Cloudy/Wind today!



With a high of F and a low of 19F. Currently, it's 31F and Partly Cloudy/Wind outside.

Current wind speeds: 21 from the Northwest

Pollen: 0

Sunrise: January 30, 2020 at 08:00PM

Sunset: January 31, 2020 at 06:09AM

UV index: 0

Humidity: 73%

via https://ift.tt/2livfew

January 31, 2020 at 10:00AM

Amazon quietly publishes its latest transparency report

Just as Amazon was basking in the news of a massive earnings win, the tech giant quietly published — as it always does — its latest transparency report, revealing a slight dip in the number of government demands for user data.

It’s a rarely seen decline in the number of demands received by a tech company, coming in a year in which almost every other tech giant — including Facebook, Google, Microsoft and Twitter — saw an increase in the number of demands they received. Only Apple also reported a decline.

Amazon said it received 1,841 subpoenas, 440 search warrants and 114 other court orders for user data — such as data from its Echo and Fire devices — during the six-month period ending December 2019.

That’s about a 4% decline from the first six months of the year.

The company’s cloud unit, Amazon Web Services, also saw a decline in the number of demands for data stored by customers, down by about 10%.

Amazon also said it received between 0 and 249 national security requests for both its consumer and cloud services (rules set out by the Justice Department only allow tech and telecom companies to report in ranges).

At the time of writing, Amazon has not yet updated its law enforcement requests page to list the latest report.

Amazon’s biannual transparency report is one of the lightest reads of any tech company’s. We previously reported on how Amazon’s transparency reports have purposefully become more vague over the years rather than clearer — bucking the industry trend. At just three pages, the company spends most of the report explaining how it responds to each kind of legal demand rather than expanding on the numbers themselves.

The company’s Ring smart camera division, which has faced heavy criticism for its poor security practices and its cozy relationship with law enforcement, still hasn’t released its own data demand figures.



from Amazon – TechCrunch https://ift.tt/2GzHKcF
via IFTTT

Sticky Table of Contents with Scrolling Active States

Say you have a two-column layout: a main column with content. Say it has a lot of content, with sections that require scrolling. And let's toss in a sidebar column that is largely empty, such that you can safely put a position: sticky; table of contents over there for all that content in the main column. A fairly common pattern for documentation.

Bramus Van Damme has a nice tutorial on all this, starting from semantic markup, implementing most of the functionality with HTML and CSS, and then doing the last bit of active nav enhancement with JavaScript.

For example, if you don't click yourself down to a section (where you might be able to get away with :target styling for active navigation), JavaScript is necessary to tell where you are scrolled to and highlight the active navigation. That active bit is handled nicely with IntersectionObserver, which is, like, the perfect API for this.
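
The gist of that JavaScript enhancement looks something like this — a minimal sketch rather than Bramus's actual code, with made-up class names and markup:

// Watch each content section; toggle a class on its TOC link
// as the section enters or leaves the viewport.
const observer = new IntersectionObserver((entries) => {
  entries.forEach((entry) => {
    const link = document.querySelector(`.toc a[href="#${entry.target.id}"]`);
    if (link) link.classList.toggle('is-active', entry.isIntersecting);
  });
}, { rootMargin: '0px 0px -60% 0px' });

document.querySelectorAll('main section[id]').forEach((section) => {
  observer.observe(section);
});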

Here's that result:

It reminds me of a very similar demo from Hakim El Hattab he called Progress Nav. The design pattern is exactly the same, but Hakim's version has this ultra fancy SVG path that draws itself along the way, indenting for sub nav. I'll embed a video here:

That one doesn't use IntersectionObserver, so if you want to hack on this, combine 'em!

The post Sticky Table of Contents with Scrolling Active States appeared first on CSS-Tricks.



from CSS-Tricks https://ift.tt/36KsXpT
via IFTTT

“resize: none;” on textareas is bad UX

Catalin Rosu:

Sometimes you need to type a long reply that consists of many paragraphs and wrapping that text within a tiny textarea box makes it hard to understand and to follow as you type. There were many times when I had to write that text within Notepad++ for example and then just paste the whole reply in that small textarea. I admit I also opened the DevTools to override the resize: none declaration but that’s not really a productive way to do things.

Removing the default resizability of a <textarea> is generally user-hurting vanity. Even if the resized textarea "breaks" the site layout, too bad, the user is trying to do something very important on this site right now and you should not let anything get in the way of that. I know the web is a big place though, so feel free to prove me wrong in the comments.
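
For what it's worth, there's a middle ground that addresses the layout-breaking worry without taking the control away; a quick sketch:

/* The declaration in question — generally best avoided: */
textarea {
  resize: none;
}

/* A friendlier compromise: allow vertical resizing only,
   so the layout's width is never affected. */
textarea {
  resize: vertical;
}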

This must have been cathartic for Catalin, who has been steadily gaining a reputation on Stack Overflow for an answer on how to prevent a textarea from resizing from almost a decade ago.


The post “resize: none;” on textareas is bad UX appeared first on CSS-Tricks.



from CSS-Tricks https://ift.tt/2PNxIZA
via IFTTT

Ring’s new security ‘control center’ isn’t nearly enough

On the same day that a Mississippi family is suing Amazon-owned smart camera maker Ring for not doing enough to prevent hackers from spying on their kids, the company has rolled out its previously announced “control center,” which it hopes will make you forget about its verifiably “awful” security practices.

In a blog post out Thursday, Ring said the new “control center,” “empowers” customers to manage their security and privacy settings.

Ring users can check to see if they’ve enabled two-factor authentication, add and remove users from the account, see which third-party services can access their Ring cameras, and opt-out of allowing police to access their video recordings without the user’s consent.

But dig deeper and Ring’s latest changes still do practically nothing to change some of its most basic, yet highly criticized security practices.

Questions were raised over these practices months ago after hackers were caught breaking into Ring cameras and remotely watching and speaking to small children. The hackers were using previously compromised email addresses and passwords — a technique known as credential stuffing — to break into the accounts. Some of those credentials, many of which were simple and easy to guess, were later published on the dark web.

Yet, Ring still has not done anything to mitigate this most basic security problem.

TechCrunch ran several passwords through Ring’s sign-up page and found we could enter any easy to guess password, like “12345678” and “password” — which have consistently ranked as some of the most common passwords for several years running.

To combat the problem, Ring said at the time users should enable two-factor authentication, a security feature that adds an additional check to prevent account breaches like password spraying, where hackers use a list of common passwords in an effort to brute force their way into accounts.

But Ring still uses a weak form of two-factor, sending you a code by text message. Text messages are not secure and can be compromised through interception and SIM swapping attacks. Even NIST, the government’s technology standards body, has deprecated support for text message-based two-factor. Experts say although text-based two-factor is better than not using it at all, it’s far less secure than app-based two-factor, where codes are delivered over an encrypted connection to an app on your phone.

Ring said it’ll make its two-factor authentication feature mandatory later this year, but has yet to say if it will ever support app-based two-factor authentication in the future.

The smart camera maker has also faced criticism for its cozy relationship with law enforcement, which has lawmakers concerned and demanding answers.

Ring allows police access to users’ videos without a subpoena or a warrant. (Unlike its parent company Amazon, Ring still does not publish the number of times police demand access to customer videos, with or without a legal request.)

Ring now says its control center will allow users to decide if police can access their videos or not.

But don’t be fooled by Ring’s promise that police “cannot see your video recordings unless you explicitly choose to share them by responding to a specific video request.” Police can still get a search warrant or a court order to obtain your videos, which isn’t particularly difficult if police can show there’s reasonable grounds that it may contain evidence — such as video footage — of a crime.

There’s nothing stopping Ring, or any other smart home maker, from offering a zero-knowledge approach to customer data, where only the user has the encryption keys to access their data. Ring cutting itself (and everyone else) out of the loop would be the only meaningful thing it could do if it truly cares about its users’ security and privacy. The company would have to decide if the trade-off is worth it — true privacy for its users versus losing out on access to user data, which would effectively kill its ongoing cooperation with police departments.

Ring says that security and privacy have “always been our top priority.” But if it’s not willing to work on the basics, its words are little more than empty promises.



from Amazon – TechCrunch https://ift.tt/2U9y6oZ
via IFTTT

AWS partners with sports leagues to change how we watch games

Since the inception of professional sports, fans have sought statistics about how their favorite teams and players are performing. Until recently, these stats were generated from basic counting, like batting averages, home runs or touchdowns.

Today, sports leagues are looking to learn more about players and find a competitive edge through more advanced stats. Beyond that, they want to engage fans more with tools like the AWS-powered NFL Next Gen Stats and MLB Statcast, software that uses compelling visuals to illustrate statistics like the probability of a receiver making a catch in the end zone or a runner’s speed between home and first base.

AWS counts Major League Baseball, the National Football League, the German Bundesliga soccer league, NASCAR, Formula 1 racing and Six Nations Rugby among its customers. How, exactly, are advanced cloud technology and machine learning helping change how we watch live sports?

Building on Moneyball



from Amazon – TechCrunch https://ift.tt/38RB1qn
via IFTTT

Understanding Immutability in JavaScript

If you haven’t worked with immutability in JavaScript before, you might find it easy to confuse it with assigning a variable to a new value, or reassignment. While it’s possible to reassign variables and values declared using let or var, you'll begin to run into issues when you try that with const.

Say we assign the value Kingsley to a variable called firstName:

let firstName = "Kingsley";

We can reassign a new value to the same variable:

firstName = "John";

This is possible because we used let. If we happen to use const instead like this:

const lastName = "Silas";

…we will get an error when we try to assign it a new value:

lastName = "Doe"
// TypeError: Assignment to constant variable.

That is not immutability.
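
Here's a quick sketch of the distinction — const only prevents reassignment, while Object.freeze() (plain JavaScript, nothing framework-specific) is what actually locks a value down:

const user = { name: "Kingsley" };
user.name = "John";       // allowed! const did not make the object immutable
console.log(user.name);   // "John"

const frozen = Object.freeze({ name: "Silas" });
frozen.name = "Doe";      // silently ignored (throws a TypeError in strict mode)
console.log(frozen.name); // "Silas"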

An important concept you’ll hear when working with a framework like React is that mutating state is a bad idea. The same applies to props. Yet, it is important to know that immutability is not a React concept. React happens to make use of the idea of immutability when working with things like state and props.

What the heck does that mean? That’s where we're going to pick things up.

Immutability is about sticking to the facts

Immutable data cannot change its structure or the data in it. It’s setting a value on a variable that cannot change, making that value a fact, or sort of like a source of truth — the same way a princess kisses a frog hoping it will turn into a handsome prince. Immutability says that frog will always be a frog.


Objects and arrays, on the other hand, allow mutation, meaning the data structure can be changed. Kissing either of those frogs may indeed result in the transformation of a prince if we tell it to.


Say we have a user object like this:

let user = { name: "James Doe", location: "Lagos" }

Next, let’s attempt to create a newUser object using those properties:

let newUser = user

Now let’s imagine the first user changes location. It will directly mutate the user object and affect the newUser as well:

user.location = "Abia"
console.log(newUser.location) // "Abia"

This might not be what we want. You can see how this sort of reassignment could cause unintended consequences.

Working with immutable objects

We want to make sure that our object isn’t mutated. If we’re going to make use of a method, it has to return a new object. In essence, we need something called a pure function.

A pure function has two properties that make it unique:

  1. The value it returns is dependent on the input passed. The returned value will not change as long as the inputs do not change.
  2. It does not change things outside of its scope.

By using Object.assign(), we can create a function that does not mutate the object passed to it. This will generate a new object instead by copying the second and third parameters into the empty object passed as the first parameter. Then the new object is returned.

const updateLocation = (data, newLocation) => {
  return Object.assign({}, data, {
    location: newLocation
  });
}

updateLocation() is a pure function. If we pass in the first user object, it returns a new user object with a new value for the location.
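
For example, using the user object from earlier:

const user = { name: "James Doe", location: "Lagos" };
const newUser = updateLocation(user, "Abia");

console.log(newUser.location); // "Abia"
console.log(user.location);    // "Lagos" — the original object is untouched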

Another way to go is using the spread operator:

const updateLocation = (data, newLocation) => {
  return {
    ...data,
    location: newLocation
  }
}

OK, so how does all of this fit into React? Let’s get into that next.

Immutability in React

In a typical React application, the state is an object. (Redux makes use of an immutable object as the basis of an application’s store.) React’s reconciliation process determines if a component should re-render, so it needs a way to keep track of the changes.

In other words, if React can’t figure out that the state of a component has changed, then it will not know to update the Virtual DOM.

Immutability, when enforced, makes it possible to keep track of those changes. This allows React to compare the old state of an object with its new state and re-render the component based on that difference.

This is why directly updating state in React is often discouraged:

this.state.username = "jamesdoe";

React will not know that the state has changed, so it won’t re-render the component.
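
A quick sketch of why that is — an immutable update produces a new object, so a cheap reference check is enough to detect the change:

const state = { username: "kingsley" };

// Direct mutation: the reference stays the same, so a shallow
// comparison has no way of noticing the change.
state.username = "jamesdoe";

// Immutable update: a new object is created, and a simple !== test
// is enough to know a re-render is needed.
const newState = { ...state, username: "jamesdoe" };
console.log(newState !== state); // true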

Immutable.js

Redux adheres to the principles of immutability. Its reducers are meant to be pure functions and, as such, they should not mutate the current state but return a new object based on the current state and action. We’d typically make use of the spread operator like we did earlier, yet it is possible to achieve the same using a library called Immutable.js.

While plain JavaScript can handle immutability, it’s possible to run into a handful of pitfalls along the way. Using Immutable.js guarantees immutability while providing a rich API that is big on performance. We won’t be going into all of the fine details of Immutable.js in this piece, but we will look at a quick example that demonstrates using it in a to-do application powered by React and Redux.

First, let’s start by importing the modules we need and set up the Todo component while we’re at it.


const { List, Map } = Immutable;
const { Provider, connect } = ReactRedux;
const { createStore } = Redux;

If you are following along on your local machine, you’ll need to have these packages installed:

npm install redux react-redux immutable 

The import statements will look like this:

import { List, Map } from "immutable";
import { Provider, connect } from "react-redux";
import { createStore } from "redux";

We can then go on to set up our Todo component with some markup:

const Todo = ({ todos, handleNewTodo }) => {
  const handleSubmit = event => {
    const text = event.target.value;
    if (event.which === 13 && text.length > 0) {
      handleNewTodo(text);
      event.target.value = "";
    }
  };

  return (
    <section className="section">
      <div className="box field">
        <label className="label">Todo</label>
        <div className="control">
          <input
            type="text"
            className="input"
            placeholder="Add todo"
            onKeyDown={handleSubmit}
          />
        </div>
      </div>
      <ul>
        {todos.map(item => (
          <div key={item.get("id")} className="box">
            {item.get("text")}
          </div>
        ))}
      </ul>
    </section>
  );
};

We’re using the handleSubmit() method to create new to-do items. For the purpose of this example, the user will only be creating new to-do items, and we only need one action for that:

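// Note: this assumes a uuid library (e.g. the uuid npm package) is available,
// either as a global here or imported alongside the other modules.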
const actions = {
  handleNewTodo(text) {
    return {
      type: "ADD_TODO",
      payload: {
        id: uuid.v4(),
        text
      }
    };
  }
};

The payload we’re creating contains the ID and the text of the to-do item. We can then go on to set up our reducer function and pass the action we created above to the reducer function:

const reducer = function(state = List(), action) {
  switch (action.type) {
    case "ADD_TODO":
      return state.push(Map(action.payload));
    default:
      return state;
  }
};

We’re going to make use of connect to create a container component so that we can plug into the store. Then we’ll need to pass in mapStateToProps() and mapDispatchToProps() functions to connect.

const mapStateToProps = state => {
  return {
    todos: state
  };
};

const mapDispatchToProps = dispatch => {
  return {
    handleNewTodo: text => dispatch(actions.handleNewTodo(text))
  };
};

const store = createStore(reducer);

const App = connect(
  mapStateToProps,
  mapDispatchToProps
)(Todo);

const rootElement = document.getElementById("root");

ReactDOM.render(
  <Provider store={store}>
    <App />
  </Provider>,
  rootElement
);

We’re making use of mapStateToProps() to supply the component with the store’s data. Then we’re using mapDispatchToProps() to make the action creators available as props to the component by binding the action to it.

In the reducer function, we make use of List from Immutable.js to create the initial state of the app.

const reducer = function(state = List(), action) {
  switch (action.type) {
    case "ADD_TODO":
      return state.push(Map(action.payload));
    default:
      return state;
  }
};

Think of List as a JavaScript array, which is why we can make use of the .push() method on state. Similarly, Map can be thought of as a JavaScript object, which is why we wrap the action payload in one. This way, there’s no need to use Object.assign() or the spread operator, and it guarantees that the current state cannot change. This looks a lot cleaner, especially if it turns out that the state is deeply nested — we do not need to have spread operators sprinkled all over.


Immutable states make it possible for code to quickly determine if a change has occurred. We do not need to do a recursive comparison on the data to determine if a change happened. That said, it’s important to mention that you might run into performance issues when working with large data structures — there’s a price that comes with copying large data objects.

But data needs to change, because otherwise there’s no need for dynamic sites or applications. The important thing is how the data is changed. Immutability provides the right way to change the data (or state) of an application. This makes it possible to trace the state’s changes and determine which parts of the application should re-render as a result.

Learning about immutability for the first time can be confusing. But you’ll get a better feel for it as you bump into the errors that pop up when state is mutated. That’s often the clearest way to understand the need for, and benefits of, immutability.


The post Understanding Immutability in JavaScript appeared first on CSS-Tricks.



from CSS-Tricks https://ift.tt/36GdsPY
via IFTTT

Techstars Detroit accelerator is shutting down

Techstars Detroit, the accelerator that has funded 54 startups in the past five years, is shutting down, TechCrunch has learned.

In an email to supporters, Techstars Detroit managing director Ted Serbinski said the accelerator was not able to secure enough funding for 2020.

“It’s clear the entire automotive mobility industry is tightening as sales slump and we hit the trough of disillusionment with autonomy,” Serbinski wrote in the email. The sales and business development piece of the accelerator is working to build a new program in Detroit if “great corporates can be found,” he added.

Techstars isn’t disappearing from Detroit altogether. The company has a presence through events like Startup Week and Startup Weekends. Serbinski will continue to support the 54 startups that have come out of the program. A number of these startups are working on Series A rounds.

Serbinski will continue to work at Techstars, this time running an accelerator program focused on “quality of life” startups.

An excerpt from Serbinski’s email:

An experiment for Techstars, Detroit showed you could build a world-class program in an emerging market, in a hyper-competitive industry, that was going through a transformational change.

More importantly, the program proved that wonderful and talented mentors from around the region and globe would graciously support the founders. Truly, an incredible community formed around this program and region. It’s wonderful to see all the new activity as Detroit continues to grow in startup and VC activity.

Techstars Detroit began in 2015 as Techstars Mobility, a mentorship-driven accelerator program that was supported by numerous corporate and auto-focused backers including Ford, Honda, Lear and Nationwide as well as global partners such as Amazon’s AWS, Silicon Valley Bank and Microsoft for Startups. The intent was to bring attention and business into Detroit, a strategy that Serbinski told TechCrunch was successful.

“The Detroit program was an experiment from the start,” Serbinski said in an interview Wednesday. “The experiment was: could Techstars run an accelerator with multiple corporate partners in an emerging market that had a lot of potential, but a significant amount of unknowns? Over the last five years, it became clear that you can work with multiple corporates, you can be in a hyper-competitive auto industry, Detroit has momentum and Silicon Valley isn’t waiting anymore. A lot of that proved out.”

Serbinski’s portfolio is diverse and global. For instance, the startups in the portfolio are from 11 different countries and 40% have female founders. Of the 54 startups Techstars Detroit invested in, just one is from Detroit and two are from Michigan. Serbinski added that he was not tied to a single thesis that “autonomy is going to take over today” and instead focused on what would work “today and tomorrow.” In other words, he didn’t heavily weight the portfolio with startups focused on autonomous vehicle technology, which could take 10 to 15 years to turn into a product.

The portfolio has had success with less than 10% of startups shutting down. Some of the successful accelerator graduates include Cargo, Acerta and Wise.

In 2019, Serbinski announced the name was changing to Techstars Detroit to diversify even more. The new broader aim was to look for startups “transforming the intersection of the physical and digital worlds that can leverage the strengths of Detroit to succeed.” It could be more than just mobility.

“The word mobility was becoming too limiting,” Serbinski wrote in a blog post at the time. “We knew we needed to reach a broader audience of entrepreneurs who may not label themselves as mobility but are great candidates for the program.”

Even as the accelerator diversified, Serbinski said, it was becoming more difficult to attract investments from the automotive industry.

“We were talking to a healthy amount of new partners for this year and all of those conversations went to zero,” he said. “I’m seeing a tightening of innovation budgets around automotive and mobility because we’re entering that trough of disillusionment for autonomy. And so, with less accessible money, it made it a lot harder for us to fill in that gap.”



from Amazon – TechCrunch https://ift.tt/2SfheuD
via IFTTT

Uses This

A little interview with me over on Uses This. I'll skip the intro since you know who I am, but I'll republish the rest here.

What hardware do you use?

I'm a fairly cliché Mac guy. After my first Commodore 64 (and then 128), the only computers I've ever had have been from Apple. I'm a longtime loyalist in that way and I don't regret a second of it. I use the 2018 MacBook Pro tricked out as much as they would sell it to me. It's the main tool for my job, so I've always made sure I had the best equipment I can. A heaping helping of luck and privilege have baked themselves into moderate success for me such that I can afford that.

At the office, I plug it into two of those LG UltraFine 4k monitors, a Microsoft Ergonomic Keyboard, and a Logitech MX Master mouse. I plug in some Audioengine A2s for speakers. Between all those extras, the desk is more cluttered in wires than I would like and I look forward to an actually wireless future.

I'm only at the office say 60% of the time and aside from that just use the MacBook Pro as it is. I'm probably a more efficient coder at the office, but my work is a lot of email and editing and social media and planning and such that is equally efficient away from the fancy office setup.

And what software?

  • Notion for tons of stuff. Project planning. Meeting notes. Documentation. Public documents.
  • Things for personal TODO lists.
  • BusyCal for calendaring.
  • 1Password for passwords, credit cards, and other secure documents and notes.
  • Slack for team and community chat.
  • WhatsApp for family chat.
  • Zoom for business face-to-face chat and group podcasting.
  • Audio Hijack for locally recording podcasts.
  • FaceTime for family face-to-face chat.
  • ScreenFlow for big long-form screen recordings.
  • Kap for small short-form screen recordings.
  • CleanMyMac for tidying up.
  • Local for local WordPress development.
  • VS Code for writing code.
  • TablePlus for dealing with databases.
  • Tower for Git.
  • iTerm for command line work.
  • Figma for design.
  • Mailplane to have a tabbed in-dock closable Gmail app.
  • Bear for notes and Markdown writing.

What would be your dream setup?

I'd happily upgrade to a tricked out 16" MacBook Pro. If I'm just throwing money at things I'd also happily take Apple's Pro Display XDR, but the price on those is a little frightening. I already have it pretty good, so I don't do a ton of dreaming about what could be better.

The post Uses This appeared first on CSS-Tricks.



from CSS-Tricks https://ift.tt/2tcGvgr
via IFTTT

Free Website Builder + Free CRM + Free Live Chat = Bitrix24

(This is a sponsored post.)

You may know Bitrix24 as the world’s most popular free CRM and sales management system, used by over 6 million businesses. But the free website builder available inside Bitrix24 is worthy of your attention, too.

Why do I need another free website/landing page builder?

There are many ways to create free websites — Wix, Squarespace, WordPress, etc. And if you need a blog, Medium, Tumblr and others are at your disposal. Bitrix24 is geared toward businesses that need websites to generate leads, sell online, issue invoices or accept payments. And there’s a world of difference between regular website builders and the ones that are designed with specific business needs in mind.

What does a good business website builder do? First, it creates websites that engage visitors so that they start interacting. This is done with the help of tools like website live chat, a contact form or a call-back request widget. Second, it comes with a landing page designer, because business websites are all about conversion rates, and increasing conversion rates requires endless tweaking and repeated testing. Third, integration between a website and a CRM system is crucial. It’s difficult to attract traffic to websites, and advertising is expensive. So, it makes sense that every prospect from the website is logged into the CRM automatically, and that you sell your goods and services to clients not just once but on a regular basis. This is why Bitrix24 comes with email and SMS marketing and an advertising ROI calculator.

Another critical requirement for many business websites is the ability to accept payments online and function as an ecommerce store, with order processing and inventory management. Bitrix24 does that too. Importantly, unlike other ecommerce platforms, Bitrix24 doesn’t charge any transaction fees or impose sales volume limits.

What else does Bitrix24 offer free of charge?

The only practical limit of the free plan is 12 users inside the account. You can use your own domain free of charge, the bandwidth is free and unlimited, and there’s only a technical limit on the number of free pages allowed (around 100) in order to prevent misuse of Bitrix24 for SEO-spam pages. In addition to offering the free cloud service, Bitrix24 sells on-premise editions with open source code access. This means that you can migrate your cloud Bitrix24 account to your own server at any moment, if necessary.

To register your free Bitrix24 account, simply click here. And if you have a public Facebook or Twitter profile and share this post, you’ll be automatically entered into a contest, in which the winner gets a 24-month subscription for the Bitrix24 Professional plan ($3,336 value).


The post Free Website Builder + Free CRM + Free Live Chat = Bitrix24 appeared first on CSS-Tricks.



from CSS-Tricks https://synd.co/2TTFB3p
via IFTTT

Wednesday, January 29, 2020

Showers Early today!



With a high of F and a low of 18F. Currently, it's 32F and Cloudy outside.

Current wind speeds: 5 from the Southwest

Pollen: 0

Sunrise: January 29, 2020 at 08:01PM

Sunset: January 30, 2020 at 06:08AM

UV index: 0

Humidity: 82%

via https://ift.tt/2livfew

January 30, 2020 at 10:00AM

How Do You Do max-font-size in CSS?

CSS doesn't have max-font-size, so if we need something that does something along those lines, we have to get tricky.

Why would you need it at all? Well, font-size itself can be set in dynamic ways. For example, font-size: 10vw;. That's using "viewport units" to size the type, which will get larger and smaller with the size of the browser window. If we had max-font-size, we could limit how big it gets (similarly the other direction with min-font-size).

One solution is to use a media query at a certain screen size breakpoint that sets the font size in a non-relative unit.

body {
  font-size: 3vw;
}
@media screen and (min-width: 1600px) {
  body {
     font-size: 30px;
  }
}

There is a concept dubbed CSS locks that gets fancier here, slowly scaling a value between a minimum and maximum. We've covered that. It can be like...

body {
  font-size: 16px;
}
@media screen and (min-width: 320px) {
  body {
    font-size: calc(16px + 6 * ((100vw - 320px) / 680));
  }
}
@media screen and (min-width: 1000px) {
  body {
    font-size: 22px;
  }
}

We've also covered how it's gotten (or will get) a lot simpler.

There is a min() function in CSS, so our example above becomes a one-liner:

font-size: min(3vw, 30px);

Or double it up with a min and max:

font-size: min(max(16px, 4vw), 22px);

Which is identical to:

font-size: clamp(16px, 4vw, 22px);

Browser compatibility for these functions is pretty sparse as I'm writing this, but Chrome currently has it. It will get there, but look at the first option in this article if you need it right now.

Now that we have these functions, it seems unlikely to me we'll ever get min-font-size and max-font-size in CSS, since the functions are almost more clear as-is.

The post How Do You Do max-font-size in CSS? appeared first on CSS-Tricks.



from CSS-Tricks https://ift.tt/31cFFNi
via IFTTT

Resizing Values in Steps in CSS

There actually is a steps() function in CSS, but it's only used for animation. You can't, for example, tell an element it's allowed to grow in height but only in steps of 10px. Maybe someday? I dunno. There would have to be some pretty clear use cases that something like background-repeat: space || round; doesn't already handle.

Another way to handle steps would be sequential media queries.

@media (max-width: 1500px) { body { font-size: 30px; }}
@media (max-width: 1400px) { body { font-size: 28px; }}
@media (max-width: 1300px) { body { font-size: 26px; }}
@media (max-width: 1200px) { body { font-size: 24px; }}
@media (max-width: 1000px) { body { font-size: 22px; }}
@media (max-width: 900px) { body { font-size: 20px; }}
@media (max-width: 800px) { body { font-size: 18px; }}
@media (max-width: 700px) { body { font-size: 16px; }}
@media (max-width: 600px) { body { font-size: 14px; }}
@media (max-width: 500px) { body { font-size: 12px; }}
@media (max-width: 400px) { body { font-size: 10px; }}
@media (max-width: 300px) { body { font-size: 8px; }}

That's just weird, and you'd probably want to use fluid typography, but the point here is resizing in steps and not just fluidity.

I came across another way to handle steps in a StackOverflow answer from John Henkel a while ago. (I was informed Star Simpson also called it out.) It's a ridiculous hack and you should never use it. But it's a CSS trick so I'm contractually obliged to share it.

The calc function uses double precision float. Therefore it exhibits a step function near 1e18... This will snap to values 0px, 1024px, 2048px, etc.

calc(6e18px + 100vw - 6e18px);

That's pretty wacky. It's a weird "implementation detail" that hasn't been specced, so you'll only see it in Chrome and Safari.

You can fiddle with that calculation and apply the value to whatever you want. Here's me tuning it down quite a bit and applying it to font-size instead.

Try resizing that to see the stepping behavior (in Chrome or Safari).

The post Resizing Values in Steps in CSS appeared first on CSS-Tricks.



from CSS-Tricks https://ift.tt/37DWiDX
via IFTTT

Four Layouts for the Price of One

Pretty notable when a tweet about a flexbox layout gets 8K+ likes on Twitter!

That's "native" CSS nesting in use there as well, assuming we get that at some point and the syntax holds.

There was some feedback that the code is inscrutable. I don't really think so; to me, it says (see the sketch after this list):

  • All these inputs are allowed both to shrink and grow
  • There is even spacing around all of it
  • The email input should be three times bigger than the others
  • If it needs to wrap, fine, wrap.
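
Here's roughly what that boils down to in CSS — a sketch with illustrative selectors, since the original tweet's markup (and its use of native nesting) may differ:

form {
  display: flex;
  flex-wrap: wrap; /* if it needs to wrap, fine, wrap */
}
form > input {
  flex: 1;          /* allowed both to shrink and grow */
  margin: 0.125rem; /* even spacing around all of it */
}
form > input[type="email"] {
  flex: 3; /* three times bigger than the others */
}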

A great use case for flexbox, which is the right layout mechanism when you aren't trying to be super precise about the size of everything.

There is a blog post (no byline 🤷‍♂️) with a more long-winded explanation.


This reminds me a lot of Tim Van Damme's Adaptive Photo Layout where photos lay themselves out with flexbox. They don't entirely keep their aspect ratios, but they mostly do, thanks to literally the flexibility of flexbox.

Here's a fun fork of the original.

It's like a zillion layouts for the price of one, and just a few lines of code to boot.

The post Four Layouts for the Price of One appeared first on CSS-Tricks.



from CSS-Tricks https://ift.tt/38SMpCG
via IFTTT

Worst Thing That Could Happen

Before I install any patch, I always open the patch notes and Ctrl-F for 'supervolcano', 'seagull', and 'garbage disposal', just to be safe.

from xkcd.com https://xkcd.com/2261/
via IFTTT

Practice GraphQL Queries With the State of JavaScript API

Learning how to build GraphQL APIs can be quite challenging. But you can learn how to use GraphQL APIs in 10 minutes! And it so happens I've got the perfect API for that: the brand new, fresh-off-the-VS-Code State of JavaScript GraphQL API.

The State of JavaScript survey is an annual survey of the JavaScript landscape. We've been doing it for four years now, and the most recent edition reached over 20,000 developers.

We've always relied on Gatsby to build our showcase site, but until this year, we were feeding our data to Gatsby in the form of static YAML files generated through some kind of arcane magic known to mere mortals as "ElasticSearch."

But since Gatsby poops out all the data sources it eats as GraphQL anyway, we thought we might as well skip the middleman and feed it GraphQL directly! Yes I know, this metaphor is getting grosser by the second and I already regret it. My point is: we built an internal GraphQL API for our data, and now we're making it available to everybody so that you too can easily exploit our dataset!

"But wait," you say. "I've spent all my life studying the blade which has left me no time to learn GraphQL!" Not to worry: that's where this article comes in.

What is GraphQL?

At its core, GraphQL is a syntax that lets you specify what data you want to receive from an API. Note that I said API, and not database. Unlike SQL, a GraphQL query does not go to your database directly but to your GraphQL API endpoint which, in turn, can connect to a database or any other data source.

The big advantage of GraphQL over older paradigms like REST is that it lets you ask for what you want. For example:

query {
  user(id: "foo123") {
    name
  }
}

Would get you a user object with a single name field. Also need the email? Just do:

query {
  user(id: "foo123") {
    name
    email
  }
}

As you can see, the user field in this example supports an id argument. And now we come to the coolest feature of GraphQL, nesting:

query {
  user(id: "foo123") {
    name
    email
    posts { 
      title
      body
    }
  }
}

Here, we're saying that we want to find the user's posts, and load their title and body. The nice thing about GraphQL is that our API layer can do the work of figuring out how to fetch that extra information in that specific format — even if it isn't stored in a nested format inside our actual database — since we're not talking to the database directly.

Sebastian Scholl does a wonderful job explaining GraphQL as if you were meeting it for the first time at a cocktail mixer.

Introducing GraphiQL

Building our first query with GraphiQL, the IDE for GraphQL

GraphiQL (note the "i" in there) is the most common GraphQL IDE out there, and it's the tool we'll use to explore the State of JavaScript API. You can launch it right now at graphiql.stateofjs.com and it'll automatically connect to our endpoint (which is api.stateofjs.com/graphql). The UI consists of three main elements: the Explorer panel, the Query Builder and the Results panels. We'll later add the Docs panels to that but let's keep it simple for now.

The Explorer tab is part of a turbo-boosted version of GraphiQL developed and maintained by OneGraph. Much thanks to them for helping us integrate it. Be sure to check out their example repo if you want to deploy your own GraphiQL instance.

Don't worry, I'm not going to make you write any code just yet. Instead, let's start from an existing GraphQL query, such as the one corresponding to developer experience with React over the past four years.

Remember how I said we were using GraphQL internally to build our site? Not only are we exposing the API, we're also exposing the queries themselves. Click the little "Export" button, copy the query in the "GraphQL" tab, paste it inside GraphiQL's query builder window, and click the "Play" button.

The GraphQL tab in the modal that triggers when clicking Export.

If everything went according to plan, you should see your data appear in the Results panel. Let's take a moment to analyze the query.

query react_experienceQuery {
  survey(survey: js) {
    tool(id: react) {
      id
      entity {
        homepage
        name
        github {
          url
        }
      }
      experience {
        allYears {
          year
          total
          completion {
            count
            percentage
          }
          awarenessInterestSatisfaction {
            awareness
            interest
            satisfaction
          }
          buckets {
            id
            count
            percentage
          }
        }
      }
    }
  }
}

First comes the query keyword which defines the start of our GraphQL query, along with the query's name, react_experienceQuery. Query names are optional in GraphQL, but can be useful for debugging purposes.

We then have our first field, survey, which takes a survey argument. (We also have a State of CSS survey, so we needed to specify which survey we're querying.) Next comes a tool field, which takes an id argument. Everything after that is related to the API results for that specific tool: entity gives you information on the selected tool itself (e.g. React), while experience contains the actual statistical data.

Now, rather than keep going through all those fields here, I'm going to teach you a little trick: Command + click (or Control + click) any of those fields inside GraphiQL, and it will bring up the Docs panel. Congrats, you've just witnessed another one of GraphQL's nifty tricks, self-documentation! You can write documentation directly into your API definition and GraphiQL will in turn make it available to end users.
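If you're wondering what that looks like on the API side: in GraphQL's schema definition language, descriptions are plain strings placed right before a type or field definition, and tools like GraphiQL pick them up automatically. A quick hypothetical sketch (the names here are illustrative, not our actual schema):

"""
Experience data for a specific tool (library, framework, etc.)
"""
type Tool {
  id: String
  """The entity (name, homepage, GitHub info) associated with this tool"""
  entity: Entity
}

type Entity {
  name: String
  homepage: String
}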

Changing variables

Let's tweak things a bit: in the Query Builder, replace "react" with "vuejs" and you should notice another cool GraphQL feature: auto-completion. This is quite helpful for avoiding mistakes and saving time! Press "Play" again and you'll get the same data, but for Vue this time.
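True to this section's title, you can take this a step further and use an actual GraphQL variable instead of editing the query text. Here's a minimal sketch (ToolID is a placeholder; check the Docs panel for the argument's real type):

query toolExperience($toolId: ToolID!) {
  survey(survey: js) {
    tool(id: $toolId) {
      id
    }
  }
}

Then supply the value in GraphiQL's Query Variables pane, where enum values are written as plain JSON strings:

{ "toolId": "vuejs" }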

Using the Explorer

We'll now unlock one more GraphQL power tool: the Explorer. The Explorer is basically a tree of your entire API that not only lets you visualize its structure but also lets you build queries without writing a single line of code! So, let's try recreating our React query using the Explorer this time.

First, let's open a new browser tab and load graphiql.stateofjs.com in it to start fresh. Click the survey node in the Explorer, then the tool node under it, and hit "Play." The tool's id field should automatically be added to the results. By the way, this is a good time to change the default argument value ("typescript") to "react."

Next, let's keep drilling down. If you add entity without any subfields, you should see a little squiggly red line underneath it, letting you know you need to specify at least one subfield. So, let's add id, name and homepage at a minimum. Another useful trick: you can tell GraphiQL to automatically add all of a field's subfields by option+control-clicking it in the Explorer.

Next up is experience. Keep adding fields and subfields until you get something that approaches the query you initially copied from the State of JavaScript site. Any item you select is instantly reflected inside the Query Builder panel. There you go, you just wrote your first GraphQL query!
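If you've been following along, your Query Builder panel should now contain something along these lines (trimmed down for brevity):

query {
  survey(survey: js) {
    tool(id: react) {
      id
      entity {
        id
        name
        homepage
      }
      experience {
        allYears {
          year
          total
        }
      }
    }
  }
}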

Filtering data

You might have noticed a purple filters item under experience. This is actually the key reason why you'd want to use our GraphQL API as opposed to simply browsing our site: any aggregation provided by the API can be filtered by a number of factors, such as the respondent's gender, company size, salary, or country.

Expand filters and select companySize and then eq and range_more_than_1000. You've just calculated React's popularity among large companies! Select range_1 instead and you can now compare it with the same datapoint among freelancers and independent developers.
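In query form, those Explorer selections translate into something like this (the exact shape of the filters argument is defined by our schema, so it's easiest to let the Explorer generate it for you):

query {
  survey(survey: js) {
    tool(id: react) {
      experience(filters: { companySize: { eq: range_more_than_1000 } }) {
        allYears {
          year
          total
        }
      }
    }
  }
}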

It's important to note that GraphQL only defines very low-level primitives, such as fields and arguments, so these eq, in, nin, etc., filters are not part of GraphQL itself, but simply arguments we've defined ourselves when setting up the API. This can be a lot of work at first, but it does give you total control over how clients can query your API.
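For instance, here's a hypothetical sketch of how such filter arguments could be declared in a schema. It's illustrative only, not our actual definitions:

enum CompanySize {
  range_1
  range_more_than_1000
  # ...plus the ranges in between
}

input CompanySizeFilter {
  eq: CompanySize    # equal to a single value
  in: [CompanySize]  # included in a list of values
  nin: [CompanySize] # not included in a list of values
}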

Conclusion

Hopefully you've seen that querying a GraphQL API isn't that big a deal, especially with awesome tools like GraphiQL to help you do it. Now sure, actually integrating GraphQL data into a real-world app is another matter, but this is mostly due to the complexity of handling data transfers between client and server. The GraphQL part itself is actually quite easy!

Whether you're hoping to get started with GraphQL or just learn enough to query our data and come up with some amazing new insights, I hope this guide proves useful!

And if you're interested in taking part in our next survey (which should be the State of CSS 2020) then be sure to sign up for our mailing list so you can be notified when we launch it.

State of JavaScript API Reference

You can find more info about the API (including links to the actual endpoint and the GitHub repo) at api.stateofjs.com.

Here's a quick glossary of the terms used inside the State of JS API.

Top-Level Fields

  • Demographics: Groups together all demographics info, such as gender, company size, salary, etc.
  • Entity: Gives access to more info about a specific "entity" (library, framework, programming language, etc.).
  • Feature: Usage data for a specific JavaScript or CSS feature.
  • Features: Same, but across a range of features.
  • Matrices: Gives access to the data used to populate our cross-referential heatmaps.
  • Opinion: Opinion data for a specific question (e.g. "Do you think JavaScript is moving in the right direction?").
  • OtherTools: Data for the "other tools" section (text editors, browsers, bundlers, etc.).
  • Resources: Data for the "resources" section (sites, blogs, podcasts, etc.).
  • Tool: Experience data for a specific tool (library, framework, etc.).
  • Tools: Same, but across a range of tools.
  • ToolsRankings: Rankings (awareness, interest, satisfaction) across a range of tools.

Common Fields

  • Completion: The proportion of survey respondents who answered any given question.
  • Buckets: The array containing the actual data.
  • Year/allYears: Data for a specific survey year, or an array containing data for all years.

The post Practice GraphQL Queries With the State of JavaScript API appeared first on CSS-Tricks.



from CSS-Tricks https://ift.tt/2S05TOQ
via IFTTT

Apollo GraphQL without JavaScript

It's cool to see progressive enhancement being done even while using the fanciest of the fancy front-end technologies.

This is a button in a JSX React component with a click handler applied directly to it, firing a data mutation Ajax request through Apollo GraphQL. That is about the least friendly environment for progressive enhancement I can imagine.

Hugo Giraudel writes that they do server-side rendering already, so the next tricky part is the click handler. Without JavaScript, the only mechanism we have for posting data is a <form>, so that's what they do. It submits to the /graphql endpoint with the data it needs to perform the mutation via hidden inputs, plus additional data on where to redirect upon success or failure.
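In other words, the no-JavaScript fallback boils down to something like this (a rough sketch of the idea only; the mutation and field names here are made up, not their actual implementation):

<form method="POST" action="/graphql">
  <!-- the mutation to run, carried in a hidden input -->
  <input type="hidden" name="query" value="mutation { likePost(id: 42) }">
  <!-- where the server should redirect afterwards -->
  <input type="hidden" name="redirectSuccess" value="/thanks">
  <input type="hidden" name="redirectFailure" value="/oops">
  <button type="submit">Like</button>
</form>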

Pretty neat.

Direct Link to Article

The post Apollo GraphQL without JavaScript appeared first on CSS-Tricks.



from CSS-Tricks https://ift.tt/2G7o0wE
via IFTTT

Tuesday, January 28, 2020

Mostly Clear today!



With a high of F and a low of 19F. Currently, it's 22F and Clear outside.

Current wind speeds: 5 from the West

Pollen: 0

Sunrise: January 28, 2020 at 08:02PM

Sunset: January 29, 2020 at 06:07AM

UV index: 0

Humidity: 74%

via https://ift.tt/2livfew

January 29, 2020 at 10:00AM

Five reasons you (really) don’t want to miss TechCrunch’s AI and Robotics show on March 3

TechCrunch’s fourth Robotics and AI show is coming up on March 3 at UC Berkeley’s Zellerbach Hall. If past experience is any guide, the show is sure to draw a big crowd (cheap student rates here!) but there’s still time to grab a pass. If you’re wondering why you want to take a day out to catch a full day of interviews and audience Q&A with the world’s top robotics and AI experts, read on.  

It’s the software / AI, stupid. So said (in so many words) the legendary surgical robotics founder Dr. Frederic Moll at Disrupt SF last year. And this year’s agenda captures that reality from many angles. UC Berkeley’s Stuart Russell will discuss his provocative book on AI – Human Compatible – and the deeply important topic of AI ‘explainability’ will be front and center with SRI’s Karen Myers, Fiddler Labs’ Krishna Gade and UC Berkeley’s Trevor Darrell. Then there is the business of developing and sustaining robots, whether at startups, which is where Freedom Robotics’ Joshua Wilson comes in, or at large enterprises, with Vicarious’ D. Scott Phoenix.

Robotics founders have more fun. That’s why we have a panel of the three top founders in agricultural robotics as well as another three on construction robotics and two on human assistive robotics, plus a pitch competition featuring five additional founders, each carefully chosen from a large pool of applicants. We’ll also bring a few of those founders back for a separate audience Q&A. Meet tomorrow’s big names in robotics today!

Big companies do robots too. No one knows that better than Amazon’s top roboticist, Tye Brady, who already presides over 100,000 warehouse robots. The editors are eager to hear what’s next in Amazon’s ambitious automation plans. Toyota’s robotics focus is mobility, and Toyota Research Institute’s TRI-AD CEO James Kuffner and TRI VP of Robotics Max Bajracharya will discuss projects they plan to roll out at the Tokyo Olympics. And if that’s not enough, Maxar Technologies’ Lucy Condakchian will show off Maxar’s robotic arm that will travel to Mars aboard the fifth Mars Rover mission later this year.

Robotics VCs are chill (once you get to know them). We will have three check writers on stage for the big talk about where they see the best investments emerging – Eric Migicovsky (Y Combinator), Kelly Chen (DCVC) and Dror Berman (Innovation Endeavors) – plus two separate audience Q&A sessions, one with notable robotics / AI VCs Rob Coneybeer (Shasta) and Aaron Jacobson (NEA), and a second with corporate VCs Quinn Li (Qualcomm) and Kass Dawson (Softbank).

Network, recruit, repeat. Last year this show drew 1,500 attendees, and they were the cream of the robotics world: founders, investors, technologists, executives and engineering students. Expect nothing less this year. TechCrunch’s CrunchMatch mobile app makes meeting folks super easy, plus the event is in UC Berkeley’s Zellerbach Hall – a sunny, happy place that naturally spins up great conversations. Don’t miss out.

Our Early Bird Ticket sale ends this Friday – book your tickets today and save $150 before prices increase. Students can book a super-discounted ticket for just $50 right here.



from Amazon – TechCrunch https://ift.tt/3aJ7bpQ
via IFTTT

Mostly Clear today!

With a high of F and a low of 15F. Currently, it's 14F and Clear outside. Current wind speeds: 13 from the Southwest Pollen: 0 S...