Automate Posting Hugo Blog to Social Sites... Failure
How not to automate posting to social sites
2 Minutes, 25 Seconds
2024-06-15 00:00 +0000
Why
I have a Hugo blog that is a pain to share across my social feeds. I want to automate it.
Create a mockup
For this I quickly sketched out my thoughts onto a writing pad. My thinking is that I will drop a YAML file into each post directory to be read by a Python application calling social APIs.
Mockup
As you can see, this is very rudimentary. I will traverse the post directories looking for a publish.yaml file that will help a Python program auto-generate some arguments to pass to some APIs.
Mockup a YAML file
So I’m thinking that my file will have a pretty basic structure, something like this:
mastodon: true
x: true
linkedin: true
Mockup the JSON config file
I do not know which arguments the APIs will need, but I’m going to guess I’ll need something like…
- Post body
- Tags
I might need some other Open Graph items, but I am hoping that these can be pulled from the rendered HTML… we’ll see.
{
  "urls": [
    {
      "post-url": "GENERATED BY DIRECTORY",
      "post-body": "GENERATED BY POST DESCRIPTION",
      "post-tags": "GENERATED BY POST TAGS",
      "mastodon": {
        "publish": true,
        "publish-date": "EPOCH DATE",
        "publish-data": "WHATEVER THE API RETURNS"
      }
    }
  ]
}
Mockup the Python publisher
I am sure there is a better way to do this. Maybe a more dynamic way? I am just writing quickly to get this done. I’ll 100% revisit this when I actually write the program and experiment.
for url in urls:
    # Join body, link, and hashtags into a single post string.
    tags = " ".join("#" + tag for tag in url['post-tags'])
    args = "\n".join([url['post-body'], url['post-url'], tags])
    if url['mastodon']["publish"]:
        publish_mastodon_article(args=args)
    elif url['facebook']["publish"]:
        publish_facebook_article(args=args)
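As a guess at what "more dynamic" could mean, here is a sketch that swaps the if/elif chain for a dispatch table keyed by platform name; the publish_*_article functions are placeholders I made up, not real APIs:

def publish_mastodon_article(args):
    print("mastodon:", args)  # placeholder for the real API call

def publish_facebook_article(args):
    print("facebook:", args)  # placeholder for the real API call

# One dict entry per network instead of another elif branch.
publishers = {
    "mastodon": publish_mastodon_article,
    "facebook": publish_facebook_article,
}

for url in urls:
    tags = " ".join("#" + tag for tag in url['post-tags'])
    args = "\n".join([url['post-body'], url['post-url'], tags])
    for platform, publish in publishers.items():
        # Only fire for platforms flagged "publish": true in the config.
        if url.get(platform, {}).get("publish"):
            publish(args=args)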
Mockup the Python directory walker
import os

content_dirs = ["content/posts"]
out_paths = []

# Collect every file path under the content directories.
for content_dir in content_dirs:
    for root, dirs, files in os.walk(content_dir, topdown=False):
        for name in files:
            print(os.path.join(root, name))
            out_paths.append(os.path.join(root, name))

for path in out_paths:
    generate_json_config(path)
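Since the plan is really to find directories that contain a publish.yaml, the same walk could be expressed more directly with pathlib; a sketch, assuming the generate_json_config function below:

from pathlib import Path

content_dirs = [Path("content/posts")]

# Only post directories that opted in with a publish.yaml are interesting.
for content_dir in content_dirs:
    for yaml_file in content_dir.rglob("publish.yaml"):
        generate_json_config(str(yaml_file.parent))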
Mockup the YAML reader
import os
import yaml  # PyYAML; the import name is yaml, not pyyaml

def generate_json_config(path):
    json_gibberish = {
        "post-url": "jnapolitano.com/en/" + path,
        "post-body": None,
        "post-tags": None,
        "mastodon": {
            "publish": False,
            "publish-date": None,
            "publish-data": None,
        },
    }
    yaml_path = os.path.join(path, "publish.yaml")
    try:
        with open(yaml_path, 'r') as yaml_gibberish:
            # Whatever the YAML holds, load it into a dict so it translates straight to JSON.
            yaml_json = yaml.safe_load(yaml_gibberish)
        json_gibberish['mastodon']["publish"] = yaml_json["mastodon"]
    except (FileNotFoundError, KeyError, TypeError):
        print("No mastodon setting, defaulting publish to False")
    return json_gibberish
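For reference, with the publish.yaml mocked up earlier, yaml.safe_load returns a plain dict, so the lookup above behaves like:

import yaml

# The mockup publish.yaml from above, inlined as a string.
raw = "mastodon: true\nx: true\nlinkedin: true\n"
settings = yaml.safe_load(raw)
print(settings["mastodon"])  # True -> lands in json_gibberish['mastodon']['publish']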
Scrap Everything and Change Course
So as I was writing the above, I came up with another solution. Writing out a mockup of the code made the entire thing feel too kludgy. I don’t like writing to files, keeping configs up to date, etc. I will write an RSS feed tracker that will just listen to the published site and then run the social jobs when the XML is updated.
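A minimal sketch of that tracker, assuming the feedparser package and a made-up run_social_jobs helper; Hugo publishes an RSS feed at index.xml by default, but the exact URL is a guess:

import time
import feedparser

FEED_URL = "https://jnapolitano.com/index.xml"  # guessed; Hugo's default feed location
seen = set()  # a real version would persist this between runs

def run_social_jobs(entry):
    # Placeholder: this is where the mastodon/x/linkedin posts would fire.
    print("would publish:", entry.link)

while True:
    feed = feedparser.parse(FEED_URL)
    for entry in feed.entries:
        if entry.link not in seen:
            seen.add(entry.link)
            run_social_jobs(entry)
    time.sleep(3600)  # poll hourly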
My new approach
My thinking is that I will add some options to the front matter of the posts that can easily be scraped from the rendered HTML. The advantage of this approach is that it will keep me always up to date. I will not add a republish feature… yet. I might, but that will be another option.
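As a rough cut at the scraping half, here is a sketch that pulls Open Graph tags from a rendered page with requests and BeautifulSoup; both libraries, and the assumption that my theme emits og: meta tags, are guesses on my part:

import requests
from bs4 import BeautifulSoup

def scrape_post_meta(url):
    # Front matter options end up as <meta property="og:..."> tags in the
    # rendered page, so the live site stays the single source of truth.
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    meta = {}
    for tag in soup.find_all("meta", property=True):
        if tag["property"].startswith("og:"):
            meta[tag["property"]] = tag.get("content")
    return meta

# e.g. scrape_post_meta("https://jnapolitano.com/en/some-post/")  # hypothetical URL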
Stay tuned
I am breaking away from this post as I am changing direction. So in the meantime, here is a photo of my cat.