otoolep's comments

rqlite author here. Happy to answer any questions about it.

https://github.com/rqlite/rqlite


Are you planning on adding websockets or something similar in the near future, to support things like data change notifications [0]?

[0] https://www.sqlite.org/c3ref/update_hook.html
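For anyone unfamiliar, update_hook fires a callback on every row change. Python's stdlib sqlite3 doesn't expose that hook (the apsw library exposes the real row-level one, I believe), but the stdlib's statement-level trace callback gives a rough sketch of the idea:

```python
import sqlite3

# Rough approximation only: set_trace_callback fires per SQL statement
# executed, not per row changed like SQLite's real update_hook.
events = []
conn = sqlite3.connect(":memory:")
conn.set_trace_callback(events.append)  # called with each statement's SQL text
conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("INSERT INTO t VALUES (1)")
conn.close()
# events now includes the INSERT statement (plus any implicit BEGIN).
```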


Not quite as simple as that. I could have still used the Hashicorp code wrongly.

And once I did: https://github.com/rqlite/rqlite/issues/5

aphyr himself chimed in.



I'm not sure how folks use it, but I think the sweet spot is simple-to-run relational storage for a smallish set of data.

Some people don't use it for the distribution; they just like having an HTTP API in front of SQLite.

https://github.com/rqlite/rqlite/blob/master/DOC/FAQ.md#why-...
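For a flavour of that HTTP API, here's a minimal Python sketch that builds (but doesn't send) a write request against the documented /db/execute endpoint. localhost:4001 is rqlite's default API address; adjust for your deployment:

```python
import json
import urllib.request

# rqlite's documented HTTP API: writes are POSTed to /db/execute as a JSON
# array of SQL statements; reads go to /db/query. We only build the request
# here, since sending it needs a running node.
BASE = "http://localhost:4001"

def execute_request(statements):
    body = json.dumps(statements).encode("utf-8")
    return urllib.request.Request(
        BASE + "/db/execute",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = execute_request(["CREATE TABLE foo (id INTEGER PRIMARY KEY, name TEXT)"])
# urllib.request.urlopen(req) would send it to a live node.
```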


"I'm not sure how folks use it"

Suggestion: have you considered running office hours and inviting your users to chat to you about what they're doing with it?

I've been doing that for six months for my Datasette project and I've had over 60 conversations now, it's been a revelation - it almost completely solved the "I don't know how people are using this" problem for me, and gave me a ton of ideas for future directions for the project.

I wrote more about that here: https://simonwillison.net/2021/Feb/19/office-hours/


Interesting! An old colleague of mine, Ben Johnson, also does the same thing for litestream, his latest SQLite replication project. I thought it was just him.

Now that I know two folks do it, I'll have to give it serious thought. Thanks for the blog post reference.

https://github.com/benbjohnson/litestream


I believe Ben got the idea from me - I was also his first ever office hours appointment once he started :)


That's close to our use case. We use it to sync a small amount of configuration data across a small number of servers (2-50, depending on the client).

rqlite was perfect, as our software is an add-on for a legacy platform and we needed an easy, low-access way of installing a distributed datastore.


Cool. Did you use read-only nodes, by any chance?

https://github.com/rqlite/rqlite/blob/master/DOC/READ_ONLY_N...


I haven't; I'm not super familiar with those systems, TBH.


Whatever SQLite exposes is available in rqlite.


rqlite author here. Yes, it's coming in a future release and is much easier to do now.

One key principle of rqlite has always been quality, clean design, and simplicity of operation. So I've been reluctant to add a feature -- in this case Request Forwarding -- until I was sure it would be a clear win and not make rqlite less robust. After years of experience with the system now, I'm happy it can be added in a high-quality manner.


It definitely makes working with round robin proxies (e.g. k8s services) much, much simpler.

Having proxies be aware of the leader, or having clients access nodes directly instead of going through those proxies, is a lot more complexity.


rqlite author here.

Yes, you're right, every node knows the Raft network address of every other node. But the Raft network address is not the address clients use to query the cluster; every node also exposes an HTTP API for queries.

So code needs to exist to share information between nodes -- in this case the HTTP API addresses -- since the Raft layer doesn't handle that.


Also, the HTTP API URL isn't deterministic, because a) the operator sets it for any given node, and b) over the lifetime of the cluster the entire set of nodes can change as nodes fail, are replaced, etc.
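A toy sketch of the problem (illustrative only, not rqlite's code): each node has to advertise its operator-chosen HTTP API address alongside its Raft address, and that mapping has to be kept fresh as membership changes:

```python
# Raft only knows Raft addresses, so nodes must exchange their HTTP API
# addresses themselves. The addresses below are made up for illustration.
cluster = {}  # raft_addr -> http_api_addr

def advertise(raft_addr, api_addr):
    cluster[raft_addr] = api_addr

def api_addr_for(raft_addr):
    # May be stale if the node at raft_addr has failed or been replaced,
    # which is exactly why this state needs ongoing maintenance.
    return cluster.get(raft_addr)

advertise("10.0.0.1:4002", "http://10.0.0.1:4001")
advertise("10.0.0.2:4002", "http://10.0.0.2:4001")
```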


>At what cluster size and concurrency does asking every node break down?

None; a follower only needs to ask the leader. So regardless of cluster size, in 6.0 querying a follower introduces only a single hop to the leader before responding to the client. This hop wasn't required in earlier versions, but those versions had to maintain state -- and stateful systems are generally more prone to bugs.
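The idea can be sketched as a toy model in a few lines (not rqlite's actual implementation):

```python
# Toy model of transparent request forwarding: any node can take a query;
# a follower forwards it to the leader, adding at most one hop, and no extra
# per-cluster routing state is needed.
class Node:
    def __init__(self, name):
        self.name = name
        self.leader = None  # every node knows the current leader

    def query(self, sql):
        if self.leader is self:
            return (self.name, sql)       # leader serves the query itself
        return self.leader.query(sql)     # follower: one hop to the leader

a, b, c = Node("a"), Node("b"), Node("c")
for n in (a, b, c):
    n.leader = a  # a is the current leader

served_by, _ = c.query("SELECT 1")
# served_by == "a": the follower forwarded transparently to the leader.
```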


I am curious about where things broke down with the 301 based solution y'all used earlier.


I included details in the blog post. The 3.x to 5.x design had the following issues:

- a stateful system, with extra data stored in Raft. There's always a chance for bugs with stateful systems.

- some corner cases whereby the state rqlite was storing got out of sync with other cluster configuration. Finding the root cause of those bugs could have been very time-consuming.

- certain failure cases happened during automatic cluster operations, meaning an operator mightn't notice them or be able to deal with them. Now those failure cases -- while still very rare -- happen at query time. The operator knows immediately that something is up, and can deal with the problem there and then, usually by just re-issuing the query.


rqlite author here. https://github.com/rqlite/rqlite/

Yes, there is no reason why that wouldn't work. rqlite supports an on-disk mode, so you could run litestream alongside an rqlite node and back up the underlying SQLite database to your favourite cloud provider using litestream.

The only downside is that rqlite performance is very sensitive to the number of writes to disk, which is why rqlite uses an in-memory SQLite database by default and persists the Raft log to disk. In exchange you can be sure that your writes are persisted to disk when the API acks your request. Perhaps if rqlite used a RAM-based filesystem for everything and you combined it with litestream, you could get much higher performance from rqlite, with backup (and only a tiny window for data loss) via litestream. It's not entirely trivial, however, due to the difference in data consistency models. For example, which SQLite database should be updated if the Leader in the cluster changes? It gets complicated.
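As a loose, stdlib-only analogy for the in-memory-with-persistence idea (illustrative only; the snapshot file path is made up, and rqlite's real durability comes from the Raft log, not periodic snapshots):

```python
import os
import sqlite3
import tempfile

# Run an in-memory SQLite database for speed, then snapshot it to a file on
# disk using sqlite3's backup API.
mem = sqlite3.connect(":memory:")
mem.execute("CREATE TABLE t (x INTEGER)")
mem.execute("INSERT INTO t VALUES (1)")
mem.commit()

path = os.path.join(tempfile.mkdtemp(), "snapshot.db")  # illustrative path
disk = sqlite3.connect(path)
mem.backup(disk)  # copy the in-memory database to the on-disk file

row = disk.execute("SELECT x FROM t").fetchone()  # (1,)
```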

Ben has written a great program here. I wish I had his ideas! It's got me thinking. :-)


Check this out: https://github.com/boltdb/bolt

It's always been considered a good example to study.

