This blog you’re reading right now has been around since March 7, 2018. It is a hosted microblog on the micro.blog service run by Manton Reece. The service is a hybrid between blog (and podcast) hosting and a social network with a timeline. It launched on Kickstarter in January 2017 and opened its doors later the same year. I supported the campaign as backer #592.
I’ve blogged a bit, but I’m not a super active community member. Still, I enjoy listening to Micro Monday, the weekly podcast introducing people who blog on the site. Catching up on the two latest episodes this morning motivated me to write a bit about the history of my (micro)blog.
In my time online, I used to have a variety of different personal websites and blogs. Somehow I didn’t stick with most of them and started over a few times. Then, in 2012, a service called app.net launched. It was what you could call a headless social network: the idea was that you had a centralized social graph and data storage, but you could use all sorts of apps and services to access it. It was an answer to the tendency of other social networks, like Twitter, to restrict their APIs and drive people to their official apps. At the same time, I followed the IndieWeb movement, the idea of owning your content and primarily making it available on a domain name you control while also integrating with existing social networks. Eventually, I married both approaches and built an open-source application called phpADNSite. With phpADNSite, your content and interactions lived on app.net, but you could present them on your domain through a custom template. Your domain also connected app.net with the IndieWeb.
Unfortunately, app.net stopped further development in 2014. There was still an engaged community at the time trying to support the platform under the “ADNFuture” banner, but it didn’t help. In March 2017, the platform shut down for good. Luckily, I had considered this scenario when building phpADNSite and implemented a backup feature that served my old app.net content as a static website after the shutdown. It just didn’t allow me to create and share anything new. So, for a while, I couldn’t publish new content.
Since I still liked the general idea of separating data storage and presentation, I considered a variety of hosted DBaaS (database-as-a-service) or headless CMS (content management system) offerings as a replacement. Also, instead of a full application like phpADNSite, the site could run on a serverless FaaS (function-as-a-service) platform. In my mind, I dubbed this “cloud-native IndieWeb”. However, I couldn’t decide on one specific approach. I wanted to experiment with several, but I didn’t have the time. That’s when I concluded that, even though “selfdogfooding” is a central idea of the IndieWeb community, it didn’t make sense to have my outlet for writing in the same place where I would do coding experiments, as it made both activities dependent on each other.
One of the reasons why I signed up for the micro.blog crowdfunding in the first place was its unique, hybrid approach, which reminded me of my own. At the time of backing, I had no idea how I would use it. But eventually, I decided that having a hosted blog on a service that roughly follows my ideals is a great approach: I don’t need to host my own and can still retain some control through my domain name.
I hope you enjoyed this little backstory of my blog, and I sincerely hope that I will find some more time to experiment more with IndieWeb technologies and the “cloud-native IndieWeb” approach.
Recently I heard a lot about a new software product called Roam Research. According to its website, it is “a note-taking tool for networked thought”. Anne-Laure Le Cunff of NessLabs, in particular, seemed to be full of praise for the application. I still remember when Evernote launched and was described as “an extension of your brain”. But Roam seems to be the one fulfilling that promise because its structure is much more brain-like. I’ve used the tool for roughly two weeks now and wanted to write a summary of my experience and why and how I use it.
Generally, I do quite a bit of reading online, and I collect information that feels important to me from the articles I read, mostly by copying verbatim quotes. I used to copy those to Evernote, where I had notes for different topics in which I would collect those quotes and their source URLs. Titles of such notes could be something like “API Design”, “Developer Experience”, “Digital Transformation”, or “Climate Change”. And this is where the problems start. For example, what about an article that covers the impact of digital transformation on climate change? It should go in both notes. I could, of course, create a note for every external piece instead, but then the only way to connect the thoughts would be extensive tagging, which I don’t use a lot in Evernote.
Roam is a web-based combination of a wiki and an outliner. Even though you also create notes or pages, Roam makes it very easy to link different pages together, inline, using hashtags (#) or double brackets ([[ ]]). Every page is a hierarchical list of hypertext paragraphs, and you can link from any hierarchy level. The application also shows you when you have used a term for which a page exists but haven’t linked it, so you can decide whether you want to connect the thoughts or not. It can also visualize your whole database as a graph. In Roam, it is not a problem to add every article you read as a page of its own and then establish links to the other material you have read, which makes the whole thing more comfortable and more rewarding.
I have a wide array of interests. Even my primary professional area has many interconnected aspects if you look at an API lifecycle and all the factors of an API - design, implementation, security, etc. - and then look at developer experience and developer relations, which involve, for example, technical writing. Then, there are many other areas of interest of mine, such as self-development, the future of work, basic income, effective altruism, and environmental issues. I don’t see these interests as separate domains but rather as various aspects of a whole that can influence each other, and where unusual connections can appear.
There are links, for example, between the API economy and the future of work. However, the picture in my mind still feels incomplete, and I lack the language to describe how it all fits together and what it means. I will keep organizing my thoughts in Roam, and I’m confident it will help me complete my mental model.
If there’s anything negative I can say about Roam, it’s that it’s quite new, so it’s not clear how it will develop. It doesn’t have an API (or integrations) of its own yet, something I believe is a minimum requirement for any SaaS product launching today. Still, you can import and export data. Also, it’s free to use with no pricing or published business model yet. I assume it will be a moderate monthly subscription, but it would be nice to know for sure.
Have you tried Roam already, and do you have any tips for me to make the most of it? Please let me know what you think! Thank you!
Security is an essential aspect of API design and implementation. And while implementing proper security measures can be hard, sometimes it’s the most basic stuff that goes wrong. The most recent APIsecurity.io newsletter was a good reminder of that.
A WordPress plugin, RankMath, introduced an API endpoint into a WordPress instance. And it added this endpoint without any authentication or authorization checks, leaving it open to the world. There are very few cases where an API can deliberately omit authentication and allow anonymous access, for example, when you provide access to data that is public anyway. But the default approach should always be to implement authentication and to test that the endpoint rejects all unauthenticated requests.
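To make that concrete, here is a minimal sketch of such a negative test in PHP, using PHPUnit and curl. The base URL and endpoint path are hypothetical placeholders, not the actual RankMath endpoint:

```php
<?php
// Hypothetical negative test: an endpoint that is supposed to require
// authentication must reject requests that carry no credentials at all.
use PHPUnit\Framework\TestCase;

class AuthenticationTest extends TestCase
{
    private const BASE_URL = 'https://api.example.com'; // placeholder host

    public function testEndpointRejectsAnonymousRequests(): void
    {
        $ch = curl_init(self::BASE_URL . '/v1/orders'); // hypothetical protected endpoint
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // capture the body instead of printing it
        curl_exec($ch);
        $status = curl_getinfo($ch, CURLINFO_RESPONSE_CODE);
        curl_close($ch);

        // Anything other than 401 or 403 means the endpoint is open to the world.
        $this->assertContains($status, [401, 403]);
    }
}
```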
An even more fundamental problem occurred with the Tapplock smart lock. The IoT device used unencrypted HTTP to communicate with its server. Nobody should use unencrypted HTTP anymore, and most definitely not for APIs.
The newsletter also mentioned “broken object-level authorization” vulnerabilities in both Tapplock and another smart device, TicTocTrack. These so-called BOLA problems occur when proper authentication is in place, but the code doesn’t check authorization for every object. It is a hard problem that cannot be solved in API design or with OpenAPI descriptions; your implementation code must prevent it. Once again, testing is your friend, and tests should not only cover success cases but also those you want to fail, to make sure they actually fail.
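A sketch of what such a BOLA-style negative test could look like, again with a made-up endpoint, ID, and token:

```php
<?php
// Hypothetical BOLA test: a valid token for user A must not grant access to
// an object owned by user B. Endpoint, ID, and token are placeholders.
use PHPUnit\Framework\TestCase;

class ObjectLevelAuthorizationTest extends TestCase
{
    private const BASE_URL        = 'https://api.example.com';
    private const USER_A_TOKEN    = 'token-for-user-a'; // placeholder credential
    private const USER_B_ORDER_ID = '4711';             // object that belongs to user B

    public function testUserCannotReadAnotherUsersObject(): void
    {
        $ch = curl_init(self::BASE_URL . '/v1/orders/' . self::USER_B_ORDER_ID);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_HTTPHEADER, [
            'Authorization: Bearer ' . self::USER_A_TOKEN, // authenticated, but as the wrong user
        ]);
        curl_exec($ch);
        $status = curl_getinfo($ch, CURLINFO_RESPONSE_CODE);
        curl_close($ch);

        // 403 (forbidden) or 404 (hide the object's existence) are fine; 200 would be a BOLA bug.
        $this->assertContains($status, [403, 404]);
    }
}
```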
At the very least, however, make sure you have authentication in place (you can specify that in OpenAPI) and always use HTTPS!
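For illustration, this is roughly what declaring a global security requirement looks like in an OpenAPI 3 description; the scheme name and path are just examples, not taken from any of the APIs mentioned above:

```yaml
openapi: 3.0.3
info:
  title: Example API     # illustrative only
  version: 1.0.0
servers:
  - url: https://api.example.com/v1   # HTTPS only
components:
  securitySchemes:
    bearerAuth:
      type: http
      scheme: bearer
security:
  - bearerAuth: []       # applies to every operation unless overridden
paths:
  /orders/{orderId}:
    get:
      summary: Retrieve a single order
      parameters:
        - name: orderId
          in: path
          required: true
          schema:
            type: string
      responses:
        '200':
          description: The requested order
        '401':
          description: Missing or invalid credentials
```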
Last night I took part in the first virtual API the Docs edition, where I listened to two great talks, one by Leah R. Tucker of {your}APIs and one by Kristof van Tomme of Pronovix. The event took place via GoToMeeting, with discussions happening in parallel on Slack. There was also an unconference part with breakout sessions happening via Google Meet, but unfortunately, I had to leave after the talks so I couldn’t join them.
Leah talked about Designing a future-proof API program. She drew parallels between supply chains and large numbers of APIs in an organization, emphasizing the need for consistency in APIs. I liked how she approached it not just from a perspective of developer experience but also a more general brand experience. That might be the right way of putting it to get buy-in from non-technical management to invest in API design and build up a data steward team.
Kristof talked about Beyond API Spray & Pray - Devportals in Digital Transformation. He described two trends of digital transformation: the first is the redefinition of closeness, replacing physical proximity with digital proximity; the second is market complexity, for which he referred to the Cynefin framework. APIs and developer portals can help in achieving transformation. Kristof also gave an overview of different types of developer portals and the role they play.
I enjoyed both talks and the Q&A that followed them. If you’re curious about the next event, you can register on the Eventbrite page and also join the new API the Docs Slack workspace.
Meetups, events, and conferences remain canceled. That affects API the Docs as well. Just a bit over a month ago, I wrote that I am volunteering on the speaker selection committee for their Portland conference and that I’m looking forward to attending the European editions in Cologne-Bonn and Brussels later this year. Portland is not happening, and neither is Cologne-Bonn. So far, Toronto in September and Brussels in November are still on, but it remains unclear how the global crisis will unfold. I hope that politicians lift strict lockdown measures or contact restrictions soon (maybe when we have enough face masks and privacy-friendly contact tracing apps). Still, I also feel that international conferences may not happen for an entire year. Once the series starts again, I’m happy to get back on speaker selection duty.
While the speaker committee has dissolved, the speakers still have an opportunity to give their presentations, just in a different format. Instead of an all-day conference, there will be bi-weekly smaller virtual API the Docs events with two talks each. The first event is on Wednesday, April 8th. However, at the time of writing, it is already at maximum capacity. Make sure you register for an upcoming event on the Eventbrite page and also join the new API the Docs Slack workspace, where the social part of the events takes place and where you can learn more about the virtual editions.
The new coronavirus is slowing down public life and the economy. At the same time, however, I am observing the public discussion expand, especially on Twitter, around two topics that I am very interested in: Remote Work and Universal Basic Income (UBI).
For us lucky knowledge workers who just need a computer and an Internet connection to get work done, remote work has always been an option, but its global impact was limited. For every successful distributed company, there’s another one believing in “butts in seats”. That may change, as at least a fraction of the people working remotely for the first time may find that it works well for them and their employers or clients. They may use this option much more in the future, with all the benefits (e.g., fewer carbon emissions from commuting) that come with it.
On the other hand, there are and always will be people who get work done with their hands and bodies out in the real world. Some of them have to continue working, but others won’t. Direct support to their employers or a reduced tax burden does not reach all of them, especially self-employed workers in the “gig economy”. Handing out cash, on the other hand, does help everyone and may stimulate an economy hit by the coronavirus. It is the right time to try a temporary UBI, or at least some one-time cash transfers, to collect more data points that show whether they work.
Along with my professional interests centered around APIs and developer experience, I have always been curious about the future of work. Every software developer and other person working in IT is in some way (maybe unconsciously) building that future. I believe that the API economy is one of the cornerstones of the world Pieter Levels described: billions of self-employed makers and a few mega-corporations. We already have the latter, but for the former to thrive, we need UBI as a safety net. And they will be working remotely.
If there’s anything good coming from the current crisis, maybe it’s kickstarting the conversations about the essential topics for the future.
The API the Docs conference series is coming back to North America with an edition in Portland on 1st May, 2020, and I’m happy to make an announcement: Along with Laura Vass, Leona Campbell, and Yuki Zalkov, I’ll be part of the speaker selection committee.
The call-for-proposals (CFP) is still open until 29th February, after which the committee will review the submitted talks and choose the ones which we feel are most interesting and valuable to the community of API practitioners.
I’ve supported API the Docs by being part of the DevPortal Awards jury for the last two years, and this year I’m excited to volunteer for the community in a different role.
While I won’t be attending the Portland conference myself, I’m looking forward to meeting you at the two European editions in Cologne-Bonn and Brussels later this year.
Let’s indulge in a bit of nostalgia this weekend. I just remembered one of the websites that I used to frequent a lot around 15-20 years ago. The site was called klamm.de, and it was a German paid portal site. Or should I say, it is, because if you follow the link, you might see that the site still exists. It still looks much like it did in its early years and has largely the same features that were developed after its launch in 1999.
At the time, “getting paid for looking at ads” was the latest fad, with paid email promotions, reviews, and even “surf bars” that would continuously show rotating banner ads next to your browser window. All of it came with sophisticated multi-level affiliate programs to make sure you’d invite your friends. And late-teenage me was much more curious about the ideas and about making some money (though I never made anything substantial) than critical of advertising and the privacy-invading technology behind it, as I am today.
Anyway, klamm.de was less about the earnings and more about the community - the so-called “klammunity” - and I spent quite a bit of my time on the forums of the site. I also assume that the site was responsible for the interest in APIs that drives my work today. How so?
At one point, klamm.de introduced “Lose” (lottery tickets) as its virtual currency, which users could bet to win prizes or trade with each other. And, to drive this process, site owner Lukas Klamm (with whom I coincidentally share a first name) created an API called ExportForce. I remember the first thing I did: I took a JavaScript-based roulette game that I had created as part of my high school computer science class and hooked it up to the API so that you could win “klamm Lose” playing roulette.
Of course, it was a stupid idea, because the game ran on the client and reported its results to the server, so you could easily cheat. Still, it inspired other hobby developers in the klammunity to build things around the API. And I learned a lot from it, too.
It’s interesting to see some of the “paid4” sites still around, even though earnings are minuscule, and we’re annoyed enough by all the advertisements we don’t get paid for. I deleted my klamm.de account after not using it for a few years, but I’d love to log in again and take a trip back in time.
Last week, I published a release of phpMAE along with an announcement post and tutorial on the CloudObjects blog. This week, as a follow-up, I have written a little about the background of the breaking changes in my open-source PHP-based sandbox for serverless/Function-as-a-Service (FaaS) development.
I finally managed to push a new release of phpMAE, my open-source, PHP-based sandbox for serverless/Function-as-a-Service (FaaS) development and (experimental) hosted service that is part of CloudObjects. For this release, I’ve updated the Getting Started tutorial and published it on the CloudObjects blog just now. I’d be happy if you gave it a try!
This is my first blog post in 2020, so first of all: Happy New Year! 🎇
The beginning of a new number on the calendar is a good time for some self-reflection. Among other things, I have thought about my relationship with social media again. Like many others who spend a great deal of time on the Internet, I have a sort of love-hate relationship with it. On the one hand, I enjoy the power of social media to connect people. On the other hand, it’s kind of addictive and can lead to mindless scrolling, which is a huge timesink and makes you feel unhappy.
For Facebook, I’ve reenabled the Disable Facebook News Feed Firefox extension, which I used before but disabled at some point.
For Twitter, I’ve taken a little inspiration from Glitch CEO and blogger Anil Dash, who wrote about cleaning up his Twitter feed for the beginning of the new year. The post is from 2018, but he sent out a tweet indicating he did the same thing this year.
I couldn’t convince myself to be as radical as Anil, so I used Tokimeki Unfollow instead. The application is inspired by Marie Kondo and works by showing you the accounts you follow one by one, together with their latest tweets, and asking whether they “still spark joy”. You can then choose to either unfollow or keep them. The process is also comparable to swiping through Tinder and similar apps.
I unfollowed inactive accounts, accounts that tweet too frequently, and those I can’t remember why I started following. I kept friends and people I’ve met in person or interacted with lately. It wasn’t a vast purge, but at least I got down from 492 followings to 302. My Twitter feed feels different and less overwhelming now.
For other networks, I haven’t made any changes.
Today I came across an article by Erik Dietrich called “Learning in a World Where Programming Skills Aren’t That Important”. I haven’t found the time to read Erik’s book Developer Hegemony yet, but I’ve read and enjoyed a lot of the writing on his blog.
Early in the article, he recounts his definition of an efficiencer. The difference between an efficiencer and a programmer is this: the programmer writes code while the efficiencer solves a problem.
A while ago, I wrote a post about a contract in which I built API-driven automation on top of Airtable instead of continuing the custom-built CRM that the previous developer had started to create. In that post, I also described my belief that a developer’s job shouldn’t be writing code but solving problems. Erik’s writing partly inspired my reasoning, but at the time, I didn’t have a fancy term for it. Now, however, I believe that the project I mentioned is an excellent example of an efficiencer’s work.
I enjoy coding, and I love writing code that does something smart. I even tend to grow attached to the lines I’ve written. But the value I can provide doesn’t necessarily lie in that code but in understanding requirements and solving the underlying problems in the best way.
I’m happy to announce that I have launched a new profile website for my freelance consulting business. The site centers on developer content production, which I have strategically decided to focus on, although it mentions other services as well. It describes the importance of content for API providers and developer-focused companies and how I intend to help them create and document sample applications for their APIs in eight steps.
Unlike my blog, which is in English only (some thoughts on this in my last post), the new profile website is available in two languages.
You can find the German version at lukasrosenstock.de and the English version at lukasrosenstock.de/en. Any feedback on the site is always appreciated 😊
Last night I listened to the latest @monday episode in which @macgenie interviewed @ton. It was quite inspirational; I especially liked the idea of his “Birthday Unconference”.
Something else that got me thinking was their discussion at the beginning of the episode about blogging in different languages. Ton primarily writes in English but also sometimes posts in Dutch or German. He used to dabble with separating the languages into different blogs but ultimately decided to put everything in the same feed and tag or categorize the content.
Personally, I never liked the idea of having multilingual content on the same blog, even when it’s tagged or categorized (though, inconsistently, I do post in multiple languages on Facebook). At the same time, I probably don’t put out enough content to justify multiple websites. I used to have multiple Twitter accounts, but even that was a little cumbersome to manage.
My blog and also my tweets are mainly about the tech industry, especially the narrow API and DevRel niche. My business targets international clients, and I share a lot of external content, which is mostly in English as well. Therefore, I think it makes sense for me to focus all my writing on English-language content. On top of that, as I mentioned in my recent post about motivations to blog more, I want to improve my written English. Another important aspect is that focusing on one language avoids the mental load of deciding which language the next post should be in.
On a side note: I do have one German 🇩🇪 social media presence, though, and that is my Innovators Gießen Twitter account, where I share mostly tech content with local relevance to the region where I live.
Right now, I’m sitting on a train en route to Hamburg. A friend and I have tickets for the performance by Ludovico Einaudi in the Elbphilharmonie tonight 🎹. It’s my first time in this new and iconic concert hall, so naturally, I’m excited!
I’m resuming my work next week, so if you’re trying to get in touch, please bear with me, and I will get back to you on Monday.
Recently I encountered the term full-stack freelancer through an article by Tiago Forte on his Praxis blog. I had heard of full-stack developers, but I had never heard that term before, so I was intrigued. Tiago defines such a person as someone who has a broad portfolio of different projects and earns multiple income streams through varied activities. It’s the opposite of a freelance expert who specializes in a single offering in a specific niche.
I don’t want to go in-depth at the moment regarding the entire concept, but I’d like to highlight one of his thoughts that was a proverbial lightbulb moment for me. After thinking about it, I realized it’s obvious, though I can’t remember someone explicitly stating this thought.
The idea is that certain activities are impossible to focus on as a full-time position or have greatly diminishing returns, but doing them in moderation can be extremely beneficial.
For me, paid guest posts are one such activity. I’ve done quite a few in the past. They have provided me with exposure, some money, and the ability to learn a lot, which I could then apply to other gigs, such as software development projects. I mentioned this briefly in my post yesterday about the motivation to write more. However, I could never be a full-time blogger because I would soon run out of ideas and lucrative opportunities to write. It’s valuable to do this infrequently, though.
Tiago mentions other things that he does once in a while, such as coaching and consulting, which are part of his varied portfolio.
For me, this ties into the discussion between generalists and specialists and the hybrid variant, the T-shaped skills. It also adds to the idea of a gig economy as the future of work. Different projects could allow a person to focus on the middle of the T while having occasional contracts that help with the ends of the T, with every client benefitting as a result.
My blog had been on a four-month hiatus during the summer. Since the end of September, I have stepped up the frequency of posting new content. The long break was mostly due to me being unsure about the scope of the blog. At least that’s what I’m telling myself. I consider this blog a professional one, an extension of my freelance business, a way to showcase my work and communicate with existing and potential clients. However, there are so many things that I feel like talking about that would be outside that definition. They are still very much part of my personal and entrepreneurial journey because they shape the way I think. So I’ve decided to ponder less about boundaries and try to post more.
The only constraint is that I want to stick to mostly short- and medium-form writing and not make this blog a place for really long-winded thoughts, essays, or tutorial-style posts - something that I can write in one to three Pomodoro sessions at most. That way, I can actually ship posts instead of just collecting ideas or working on never-finished drafts.
Blogging is beneficial even without an audience because writing about something helps with clarifying one’s thoughts. For a non-native speaker like myself, it also helps me to hone my written English skills.
On the other hand, I believe it’s crucial to cultivate a personal brand and define what you stand for and what makes you unique. Blogging more will certainly help me do that.
There are many good examples of people who gained new professional and personal opportunities because of what they published online. I have even won a paid contract thanks to a blog post that I wrote. In that specific case, it wasn’t something that I wrote on a personal blog but a guest post. Still, it underlines the importance of putting your name out there. And since you never truly own your representation on someone else’s website, it’s essential to have a place online to call your own. Thank you for coming to this place to read this!
If you want to read more about this subject, I recommend Jamie Tanna’s article “Why I Have a Website and You Should Too”.
Good API design is important! And one of the main aspects of any good design (not just for APIs) is consistency. Developers (or other users) should be able to recognize patterns and not suddenly encounter elements that go against their expectations. The typical approach to enforce consistency between multiple people working on a product or even across teams in an organization is to write down a set of rules - an API style guide.
The problem with rulebooks of any kind, though, is that people don’t like reading them or, even if they do, they cannot remember it all and so they accidentally break the rules. That can be prevented, of course, with automatic validation and linting through tools like Spectral. However, maybe the better approach is not having too many rules to begin with?!
That was the lesson learned by Holger Reinhardt, CTO of Adello. As he wrote on the company’s tech blog, based on his experience of writing an extensive style guide for his previous employer, he tried to limit himself “to the very critical and core aspects of API design”.
The resulting Adello API Styleguide is publicly available. I like how it covers everything that I would consider essential, provides additional reading material, and references principles like Postel’s law. Still, you can skim it in seconds and read the entire thing in minutes. I’m sure this guide could be a good example for other companies to build their own API style guides upon.
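To give an idea of how a deliberately small rulebook can still be enforced automatically with a linter like Spectral (mentioned above), here is a sketch of a ruleset that extends Spectral’s built-in OpenAPI rules and adds a single naming convention. The specific rules are my own illustration, not taken from the Adello guide:

```yaml
# .spectral.yaml - a minimal ruleset in the spirit of "few but critical rules"
extends: spectral:oas
rules:
  operation-description: error   # every operation needs a description
  operation-operationId: error   # ...and a stable operationId
  paths-kebab-case:              # illustrative custom rule
    description: Path segments should be kebab-case.
    severity: warn
    given: "$.paths[*]~"
    then:
      function: pattern
      functionOptions:
        match: "^(/[a-z0-9.{}-]+)+$"
```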
By the way: if you think your API design needs improvement and you could use some help, I’m available for API consulting freelance work!
When I recently saw the link to a petition for a Fossil Free Facebook, I assumed it was someone asking Facebook to switch their datacenters to energy from renewable sources. It reminded me of another petition I wrote about last year, which asked the tech industry in general to switch to green power. It seems, however, that Facebook has already pledged to do that, which is laudable.
The Fossil Free Facebook campaign, though, goes a step further than that. They target Facebook in its position as one of the largest advertising companies in the world and ask it to reject advertising from oil and gas companies. It’s well known by now that many of the giant energy companies, while investing in renewable energy themselves, also try to influence public opinion and downplay the risks associated with climate change, or even fund outright denial.
If you believe that companies like Facebook should take more responsibility for the ads that they host, have a look at the petition and sign it if you like.
Zoho is probably one of the most underrated companies in the tech world. They have amassed a broad portfolio of enterprise SaaS products for practically every business case out there. The tools have integrations between them, and almost all have APIs for third-party integrations. And even though they serve 50+ million users (according to their About us page), they somehow fly under the radar. That might have a lot to do with the fact that they are bootstrapped without outside capital, so there is no story for tech media that loves to write about VCs and funding rounds. Also, their main center of operations is in Chennai, Tamil Nadu, India, and not in a tech hub in the West.
The reason I’m writing about Zoho is that I’ve recently learned they have launched Zoho Catalyst, a serverless platform. At the time of writing, the product is in closed beta. It offers software developers BaaS (Backend-as-a-Service), FaaS (Function-as-a-Service), and even AI (artificial intelligence) APIs, along with integration into the existing Zoho stack. But more importantly, it is, as they’ve put it nicely on the website, “powered by our life’s work”. That means they have taken infrastructure and internal services that they built for themselves and opened them up to the outside world.
It could be a smart move, considering that this is something Amazon has done quite successfully: it turned a bookseller into one of the world’s largest cloud hosting companies, famously kicked off by a decision from Jeff Bezos. It’s another example of a backward vertical integration business strategy that involves moving down a layer in the “stack”. Although Zoho is much smaller, of course, they might benefit from economies of scale and their expertise. I’m looking forward to observing if and how Zoho Catalyst will succeed.
“A software developer’s job is to write code. That’s the reason people hire them.” I disagree. I believe a developer’s job is to solve a problem. In a world of SaaS, code libraries, APIs, and automation tools, the best way to solve the problem might be to identify one or more existing solutions and focus on their integration, instead of reinventing the wheel.
Once, a potential client contacted me with a specific need for marketing automation involving external APIs. The details are confidential, of course, but they don’t matter for the point I want to make. The client mentioned that he was already working on it with another developer. However, he was unhappy with the work done so far and was thinking of reassigning the project.
After the client shared the current state of the project with me, I realized that the developer had started implementing a specialized CRM (customer relationship management) tool from scratch. It would have taken some time to complete this CRM before even getting to the API-related work. Thus, I made a suggestion.
I proposed to scrap the whole thing, design a CRM with the required fields in Airtable, and have the client and his employees manage everything through Airtable’s frontend. Managing structured data in tables is a solved problem, and Airtable is a magnificent tool. I would then implement a cron-triggered service that pulls the data from Airtable via the API and initiates the required requests on the other APIs. The API results would go back into Airtable. This way, I could focus my work on the value-add for his business instead of writing generic frontend and data management code.
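As a rough sketch of that integration pattern (the base ID, table, field names, and the downstream call are all made up here, since the real project is confidential), the cron-triggered PHP script could look something like this:

```php
<?php
// Sketch of a cron-triggered Airtable integration. Base ID, table, field
// names, and the downstream API call are placeholders for illustration.
const AIRTABLE_API_KEY = 'key_placeholder';
const AIRTABLE_BASE_ID = 'app_placeholder';
const TABLE_NAME       = 'Contacts';

// Minimal helper around Airtable's REST API using curl.
function airtableRequest(string $method, string $path, ?array $payload = null): array
{
    $ch = curl_init('https://api.airtable.com/v0/' . AIRTABLE_BASE_ID . $path);
    $headers = ['Authorization: Bearer ' . AIRTABLE_API_KEY];
    if ($payload !== null) {
        $headers[] = 'Content-Type: application/json';
        curl_setopt($ch, CURLOPT_POSTFIELDS, json_encode($payload));
    }
    curl_setopt_array($ch, [
        CURLOPT_CUSTOMREQUEST  => $method,
        CURLOPT_HTTPHEADER     => $headers,
        CURLOPT_RETURNTRANSFER => true,
    ]);
    $response = json_decode(curl_exec($ch), true);
    curl_close($ch);
    return is_array($response) ? $response : [];
}

// Stand-in for the confidential downstream API call.
function callMarketingApi(array $fields): string
{
    return 'ok';
}

// 1. Pull records that still need processing (the filter formula is illustrative).
$query   = '?filterByFormula=' . rawurlencode('{Status} = "Pending"');
$records = airtableRequest('GET', '/' . rawurlencode(TABLE_NAME) . $query)['records'] ?? [];

foreach ($records as $record) {
    // 2. Call the external API with the data from this record.
    $result = callMarketingApi($record['fields']);

    // 3. Write the result back so it shows up in Airtable's own UI.
    airtableRequest('PATCH', '/' . rawurlencode(TABLE_NAME) . '/' . $record['id'], [
        'fields' => ['Status' => 'Done', 'Result' => $result],
    ]);
}
```

Pagination and error handling are left out of this sketch for brevity.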
The client happily accepted, I built it, and he’s now using the system daily. Building headless (frontend-less) integrations with APIs and leveraging existing user interfaces wherever possible might not be the natural and intuitive choice. However, I am confident it can quite often be better and faster than writing your own. It can also help to keep a project within its time and budget constraints.
While trying to clean out my saved content in Pocket, I came across an article by Evgeny Morozov published in the SZ (Süddeutsche Zeitung) in German with the title “Für eine radikale demokratische Transformation” (UPDATE: “The left needs to get radical on big tech – moderate solutions won’t cut it” in The Guardian seems to be the English version - thank you Sebastian Lasse!).
In the article, Morozov writes about the “techlash” as a growing feeling that a few dominant corporations have too much control over the Internet. And, as he says, there are basically three types of opposition to the status quo, which I found an interesting classification.
The first type of criticism is economism. It is mainly centered on the idea that corporations profit off personal data and free labor from consumers, who should be compensated, for example, through a “data dividend”.
The second type of criticism is technocracy. It identifies monopolies as the main issue and calls for antitrust regulations to break them down and establish more competition in the market.
Morozov says that both of these schools of thought are too limited because they only look at the economy. He identifies a third group that is trying to go beyond the status quo, rethink the purpose of technology in the world, and look at data as more than just an economic resource. Essentially, this group is trying to envision a utopia with different rules and new players, including non-commercial ones, and more democratic control. So far, however, this group lacks unity and visibility.
I follow a few initiatives that are part of this third group, such as the IndieWeb movement or Mastodon/ActivityPub, as an alternative approach to social networks. The platform cooperatives movement and some Blockchain-based projects also belong there. And so do tiny indie makers, such as this blog’s host micro.blog. The field is incredibly diverse, but that makes it very difficult to navigate and understand. And, in my opinion, they can’t speak with one voice. The only thing that unites them is the idea that there’s something better than the status quo, but the details vary. Therefore, if we want to limit the power of the technology giants, it might be easier to implement the more straightforward suggestions from the other two groups first. Next, the initiatives in the third group can compete or cooperate on a more level playing field.
There’s also a fourth approach that I sympathize with, for obvious reasons. Albert Wenger, a VC at USV, postulates the “right to an API key”. The idea is that companies must make their data troves available in a machine-readable format. In this way, individuals can be empowered to leverage that data in a way that benefits them instead of being limited to the algorithms offered to them.
Next week it’s API the Docs time again. The API conference is coming back to Amsterdam and is sold out! I’m a regular attendee of their events and was even a speaker last year in Paris. This time I’m not speaking, but I’m looking forward to connecting with the community. I booked my train tickets and accommodation this morning, and I have some time on the days before and after the conference if anyone wants to meet!
There’s one active role I’m playing for the community, though: for the second time, after London last year, I’m on the jury of the DevPortal Awards. Last week I spent some time reviewing many great submissions. I also had two calls with fellow jury members Bob Watson and Anne Gentle as well as Katalin Fogas of Pronovix, who is organizing the awards. I can say that it wasn’t always easy, but we have some winners now! They will be announced at the awards gala happening next week as part of API the Docs and then published online - so stay tuned for that!
I just watched the 4-minute video clip from Greta Thunberg and George Monbiot in which they call for more support for natural climate solutions to help tackle climate change.
One of those solutions is, quite obviously, planting more trees 🌳! And although we can only solve the climate crisis with public measures implemented on a global scale, individual actions are still important. Donating to reforestation projects is one of those actions. Fellow API practitioner Phil Sturgeon has been beating the drum a lot lately for a site called offset.earth. The site collects donations for Eden Reforestation Projects but adds a community and gamification aspect on top of it, which I really like:
Every member grows their own virtual forest, and they get a profile URL which displays the trees as well as the impact of the carbon offsetting. The profile can also be used to display other personal goals. Have a look at my profile page at offset.earth/lukas, for example. If you want to make a one-time donation (any amount, payable in EUR, GBP, or USD), you can do that on my page to help my forest grow.
Or, if you want to donate regularly (it’s GBP 4.50 a month), you can sign up through my special referral link. Then, you will get your personal offset.earth URL with your own forest, and I will get some “sparkly trees” in mine.
After some months of silence, I’m returning to my blog. And I want to talk about one of the defining problems of our times: climate change, and how to avoid disastrous global warming.
Now, I’m interested in this topic because I believe the scientists who say that we have to act now. But I’m by no means an expert. And neither was my friend Mischa Hildebrand, a physicist and radio journalist turned software developer. However, like (hopefully) you and me, he’s a person who cares about the impact of his life on the planet and the people around him. Therefore, he has taken the time to write a great piece about climate change.
He says that as humankind, we have to limit our use of natural resources by moving away from growth and expansion. Technological progress is significant, but it won’t save us. It’s a long read but worth your time. And, like a good scientist, the author focuses on analyzing the problem with depth and structure, citing numbers and moral principles, instead of jumping to quick conclusions and solutions.
I can’t urge you enough to go and read Mischa’s article “Climate Change is Not Our Problem. We Are.” and share it on your blogs and social media channels.