I’ve written a script to repost GitHub posts to Lemmy, but now I’m trying to repost comments too, and I don’t know how to get the corresponding post_id that I need to do that:
lemmy.comment.create(post_id, comment_body)
Is there a way to associate the GitHub URL with the post_id when creating the post with Pythorhead, and then later work out the GitHub URL from the comment and look up the post_id that way?
Alternatively, would something like this work? If so, how would I limit the search to a specific community?
from pythorhead import Lemmy
import requests

# Log in with Pythorhead so we can create the comment at the end
lemmy = Lemmy("https://lemmy.ml")
lemmy.log_in("username", "password")

# GitHub URL of the post
github_url = "https://github.com/user/repo/issues/123"

# Lemmy API endpoint to search for posts
lemmy_search_url = "https://lemmy.ml/api/v3/search"

# Query parameters for the search endpoint
# (the search type parameter is spelled "type_", and "Url" matches posts by their link)
params = {
    "q": github_url,
    "type_": "Url",
    "sort": "TopAll",
    "limit": 1,
}

# Send a GET request to the search endpoint
response = requests.get(lemmy_search_url, params=params)
response.raise_for_status()

# Each search result is a PostView; the numeric id is nested under "post"
post_id = response.json()["posts"][0]["post"]["id"]

# Use the post_id to create a comment in Lemmy
comment_body = "Test comment"  # placeholder
lemmy.comment.create(post_id, comment_body)
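For the community part, I noticed the search endpoint also seems to accept a community_name parameter, so maybe adding this to the params would scope it? Just a guess, I haven’t verified it:

params["community_name"] = "mycommunity"  # assumed parameter; "mycommunity" is a placeholder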
Are there any other ways?
The best way this can work is if your script keeps a record, in its own DB, of the GitHub posts it has made, and then polls them for comments in the future. Store the GitHub URL and the corresponding Lemmy post in SQLite. When the schedule runs, go through the GitHub posts you’ve already added and check for comments. Whenever a new comment has been added, post it on the corresponding Lemmy post and store it in the DB as a relation of GitHub comment -> Lemmy comment. When checking, if the GitHub comment already exists in your DB, skip it. Something like the sketch below.
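Rough sketch of that bookkeeping (fetch_issue_comments() is a placeholder for your GitHub API call, and I’m guessing at the shape of Pythorhead’s return value):

import sqlite3
from pythorhead import Lemmy

lemmy = Lemmy("https://lemmy.ml")
lemmy.log_in("username", "password")

conn = sqlite3.connect("repost_state.db")
conn.execute("CREATE TABLE IF NOT EXISTS posts (github_url TEXT PRIMARY KEY, lemmy_post_id INTEGER)")
conn.execute("CREATE TABLE IF NOT EXISTS comments (github_comment_id TEXT PRIMARY KEY, lemmy_comment_id INTEGER)")

# Walk every GitHub post we've already reposted
for github_url, lemmy_post_id in conn.execute("SELECT github_url, lemmy_post_id FROM posts").fetchall():
    for gh_comment in fetch_issue_comments(github_url):  # placeholder for your GitHub API call
        # Skip comments we've already mirrored
        seen = conn.execute("SELECT 1 FROM comments WHERE github_comment_id = ?",
                            (str(gh_comment["id"]),)).fetchone()
        if seen:
            continue
        result = lemmy.comment.create(lemmy_post_id, gh_comment["body"])
        # Adjust this to whatever Pythorhead actually returns for a created comment
        lemmy_comment_id = result["comment_view"]["comment"]["id"]
        conn.execute("INSERT INTO comments VALUES (?, ?)", (str(gh_comment["id"]), lemmy_comment_id))
        conn.commit()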
The other thing to remember is that post IDs are relative to the Lemmy server you’re working with. So post/12345 on one server is almost surely not post/12345 on another. I mod a couple of communities on another server, and this caught me off guard when trying to share what I thought were good URLs.
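If you need a link that works from anywhere, the post object in the API response carries an ap_id field, which (as far as I know) is the canonical URL on the post’s home instance:

post_view = response.json()["posts"][0]
share_url = post_view["post"]["ap_id"]  # canonical URL, safe to share across instances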
OP, you could even use a local file or SQLite database in the repo and just update and commit it when the script runs.
Simon Willison has a cool approach for this that runs in GitHub Actions and keeps the versioned state in git itself: https://simonwillison.net/2020/Oct/9/git-scraping/
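A minimal Python version of that idea, assuming the script runs inside a checkout it’s allowed to push to (the file name is just an example):

import subprocess

def commit_state(path="repost_state.db"):
    # Stage the state file
    subprocess.run(["git", "add", path], check=True)
    # "git diff --cached --quiet" exits non-zero only when something is staged
    if subprocess.run(["git", "diff", "--cached", "--quiet"]).returncode != 0:
        subprocess.run(["git", "commit", "-m", "Update repost state"], check=True)
        subprocess.run(["git", "push"], check=True)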