Welcome to episode 17 of the Steampipe+Mastodon series, in which we introduce a new subplot: timeline history. So far, the examples I've shown and discussed work with current timelines. We've seen SQL queries that fetch results from real-time calls to the Mastodon API, and Steampipe dashboards that display those results. But Steampipe isn't just an API siphon, it's also a Postgres database. As such it supports the transient tables created by Steampipe's foreign data wrapper and plugins, but it also lets you create your own native tables. And you can use those native tables to accumulate data from the transient foreign tables.
Because saving and searching Mastodon data is a controversial topic in the fediverse (none of us wants to recapitulate Big Social), I've focused so far on queries that explore the recent Mastodon flow, of which there are plenty more to write. But nobody should mind me remembering my own home timeline, so a few weeks ago I made a tool to read it hourly and add new toots to a Postgres table.
Before you can add any toots to a table, of course, you've got to create that table. Here's how I made this one.
create table mastodon_home_timeline as select * from mastodon_toot_home limit 200
Once created, the table can be updated with new toots like so.
with data as (
  select
    account,
    -- more
    -- columns
    username
  from
    mastodon_toot_home
  limit 200
)
insert into mastodon_home_timeline (
  account,
  -- more
  -- columns
  username
)
select
  *
from
  data
where
  id not in ( select t.id from mastodon_home_timeline t )
To run that query from a crontab, on a machine where Steampipe is installed, save it as mastodon_home_timeline.sql, then schedule it.
15 * * * * cd /home/jon/mastodon; steampipe query mastodon_home_timeline.sql
That's it! Now the number reported by select count(*) from mastodon_home_timeline is growing hourly.
I've only been collecting toots for a couple of weeks, and haven't yet begun to explore that data; we'll see what happens when we get there. Meanwhile, though, I want to show how such exploration can be a team exercise.
A friend of mine, whom I'll call Elvis, shares my interest in teasing out connections among people, servers, and hashtags. He could capture his own timeline using the method shown here. But since we'll be looking at this data together, we agreed that I'll gather both our timelines. To enable that, he shared a (revocable) Mastodon API token that I've used to configure Steampipe with credentials for both our accounts.
connection "mastodon_social_jon" { plugin = "mastodon" server = "https://mastodon.social" access_token = "..." } connection "mastodon_social_elvis" { plugin = "mastodon" server = "https://mastodon.social" access_token = "..." }
Steampipe's foreign data wrapper turns each of these named connections into its own Postgres schema. Although we happen to share the same home server, by the way, we needn't. A team collaborating like this could pool timelines from mastodon.social and hachyderm.io and fosstodon.org and any other Mastodon-API-compatible server.
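To confirm that mapping, you can ask Postgres itself to list the schemas. Here's a minimal check, assuming the connection names used above (the like filter simply matches that naming):

-- list the Postgres schemas Steampipe created for the Mastodon connections
select schema_name
from information_schema.schemata
where schema_name like 'mastodon%'
order by schema_name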
(You can do the same thing with AWS or Slack or GitHub or any other kind of account by defining multiple connections. Steampipe makes API calls concurrently across parallel connections.)
With this configuration I can read my timeline like so.
select * from mastodon_social_jon.mastodon_toot_home limit 200
And Elvis’s like so.
select * from mastodon_social_elvis.mastodon_toot_home limit 200
If I want to query both in real time, for example to count the combined total, I can use a SQL UNION; a minimal sketch of that approach appears below. Alternatively, I can define an umbrella connection that aggregates the two, as shown in the configuration that follows the sketch.
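Here's a sketch of the UNION approach, assuming (as the insert query above does) that id uniquely identifies a toot:

-- count the combined, de-duplicated toots from both home timelines
select count(*)
from (
  ( select id from mastodon_social_jon.mastodon_toot_home limit 200 )
  union
  ( select id from mastodon_social_elvis.mastodon_toot_home limit 200 )
) as combined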
connection "all_mastodon" { plugin = "mastodon" sort = "aggregator" connections = [ "mastodon_social_jon", "mastodon_social_elvis" ] } connection "mastodon_social_jon" { plugin = "mastodon" server = "https://mastodon.social" access_token = "..." } connection "mastodon_social_elvis" { plugin = "mastodon" server = "https://mastodon.social" access_token = "..." }
Now the query

select * from all_mastodon.mastodon_toot_home limit 200

makes API calls on behalf of both accounts, in parallel, and combines the results. When we follow the resulting URLs in order to reply or boost, we'll do so as individual identities. And we'll be able to use Steampipe queries and dashboards in that same single-user mode. But we'll also be able to pool our timelines and point our queries and dashboards at the combined history.
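One way to accumulate that pooled history could mirror the pattern used earlier for mastodon_home_timeline. This is only a sketch, and the table name mastodon_team_timeline is hypothetical:

-- hypothetical: seed a shared native table from the aggregator connection
create table mastodon_team_timeline as
  select * from all_mastodon.mastodon_toot_home limit 200;

-- then, on a schedule, append only toots not already captured
insert into mastodon_team_timeline
select *
from all_mastodon.mastodon_toot_home
where id not in ( select t.id from mastodon_team_timeline t )
limit 200;

Whether to de-duplicate toots that appear in both of our timelines, or to tag each row with the account that captured it, is a design choice worth settling before much data accumulates.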
Will that prove interesting? Useful? That remains to be seen. I think it's one of many experiments worth trying as the fediverse sorts itself out. And I see Steampipe as one laboratory in which to run such experiments. With SQL as the abstraction over APIs, aggregation of connections, and dashboards as code, you have all the ingredients needed to iterate rapidly, at low cost, toward shared Mastodon spaces tailored for teams or groups.
This series:
- Autonomy, packet size, friction, fanout, and velocity
- Mastodon, Steampipe, and RSS
- Browsing the fediverse
- A Bloomberg terminal for Mastodon
- Create your own Mastodon UX
- Lists and people on Mastodon
- How many people in my Mastodon feed also tweeted today?
- Instance-qualified Mastodon URLs
- Mastodon relationship graphs
- Working with Mastodon lists
- Images considered harmful (sometimes)
- Mapping the wider fediverse
- Protocols, APIs, and conventions
- News in the fediverse
- Mapping people and tags in Mastodon
- Visualizing Mastodon server moderation
- Mastodon timelines for teams