you can add an if key to a job to run it conditionally. you also have a lot of metadata available in github actions about the event that triggered the workflow and the repo it runs on.
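for example, a minimal sketch (the label name and job are made up) that only runs a job when the triggering issue carries a certain label:

```yaml
jobs:
  create-post:
    # hypothetical condition: only run when the issue that triggered the
    # workflow has the "post" label
    if: contains(github.event.issue.labels.*.name, 'post')
    runs-on: ubuntu-latest
    steps:
      - run: echo "issue ${{ github.event.issue.number }} opened by ${{ github.event.issue.user.login }}"
```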
because uv supports running scripts with dependencies declared in inline metadata, and typer can turn any function into a cli, you can put the two together and build some really powerful small utilities. all you need is to define a function and wrap it in typer.run() in a script that lists typer as a dependency in the inline metadata.
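the minimal version of that pattern looks something like this (a sketch with a made-up function; the real script is below):

```python
# /// script
# dependencies = ["typer"]
# ///
import typer


def hello(name: str = "world"):
    # parameters with defaults become cli options, e.g. --name
    print(f"hello, {name}!")


if __name__ == "__main__":
    typer.run(hello)
```

running it with uv run hello.py --name you makes uv install typer into an ephemeral environment, and typer exposes the function as a cli, including --help.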
after some iterations, this is the final script (so far):
issue-to-md.py
```python
# /// script
# dependencies = [
#     "typer",
#     "rich",
#     "pyyaml",
# ]
# ///
import json
import re
from datetime import datetime
from pathlib import Path
from zoneinfo import ZoneInfo

import typer
import yaml
from rich import print
from typing_extensions import Annotated


def generate_post_from_issue(
    issue_title: Annotated[str, typer.Option("--title", "-t")],
    issue_body: Annotated[str, typer.Option("--body", "-b")],
    issue_labels: Annotated[str, typer.Option("--labels", "-l")],
    issue_created_at: Annotated[str, typer.Option("--created-at", "-c")],
    base_dir: Annotated[str, typer.Option("--base-dir", "-d")] = "blog/posts",
):
    # Convert labels to a list of tags
    tags = [label["name"] for label in json.loads(issue_labels)]

    # Convert ISSUE_CREATED_AT (a UTC timestamp) to PST and format as YYYY-MM-DD
    utc_time = datetime.strptime(issue_created_at, "%Y-%m-%dT%H:%M:%SZ").replace(
        tzinfo=ZoneInfo("UTC")
    )
    pst_time = utc_time.astimezone(ZoneInfo("America/Los_Angeles"))
    created_at_pst = pst_time.date()

    # Extract the category from the part of the title before the first colon, default to "project" if none
    category = (
        issue_title.split(":")[0].strip().lower() if ":" in issue_title else "project"
    )

    # Extract the title content after the first colon
    title = (
        issue_title.split(":", 1)[1].strip()
        if ":" in issue_title
        else issue_title.strip()
    )

    # Determine directory based on category
    dir_path = Path(base_dir) / ("til" if category == "til" else "")
    dir_path.mkdir(parents=True, exist_ok=True)

    # Generate a slugified version of the title for the filename
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

    # Create the front matter dictionary
    front_matter = {
        "title": title,
        "date": created_at_pst,
        "categories": [category],
        "tags": tags,
    }

    # Prepare YAML front matter and issue body
    yaml_front_matter = yaml.dump(front_matter, default_flow_style=False)
    content = f"---\n{yaml_front_matter}---\n\n{issue_body}"

    # Define filename
    filename = dir_path / f"{slug}.md"

    # Write content to file
    filename.write_text(content, encoding="utf-8")
    print(f"Markdown file created: {filename}")


if __name__ == "__main__":
    typer.run(generate_post_from_issue)
```
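a hypothetical invocation (all values made up) to try it locally:

```bash
uv run issue-to-md.py \
  --title "til: a made-up post title" \
  --body "some markdown body text" \
  --labels '[{"name": "python"}, {"name": "github-actions"}]' \
  --created-at "2025-01-01T12:00:00Z"
```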
you can automate creating a new markdown file in a directory in your repo with front matter metadata from github issues. you can then create a pull request to deploy those changes to your main branch. my plan is to use this to capture more ideas on the go (on my phone).
.github/workflows/issue-to-md.yml
```yaml
name: Create Post from Issue

permissions:
  contents: write
  pull-requests: write

on:
  issues:
    types: [opened]

jobs:
  create-post:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Generate Post from Issue
        env:
          ISSUE_NUMBER: ${{ github.event.issue.number }}
          ISSUE_TITLE: ${{ github.event.issue.title }}
          ISSUE_BODY: ${{ github.event.issue.body }}
          ISSUE_LABELS: ${{ toJson(github.event.issue.labels) }}
          ISSUE_CREATED_AT: ${{ github.event.issue.created_at }}
        run: |
          # Convert labels to a list of tags
          TAGS=$(echo $ISSUE_LABELS | jq -r '.[] | .name' | paste -sd, -)

          # Convert ISSUE_CREATED_AT to PST and format as YYYY-MM-DD
          CREATED_AT_PST=$(TZ="America/Los_Angeles" date -d "${ISSUE_CREATED_AT}" +"%Y-%m-%d")

          # Extract the category from the part of the title before the first colon, default to "project" if none
          CATEGORY=$(echo "$ISSUE_TITLE" | awk -F: '{print $1}' | tr -d '[:space:]' | tr '[:upper:]' '[:lower:]')
          if [ -z "$CATEGORY" ]; then
            CATEGORY="project"
          fi

          # Extract the title content after the first colon
          TITLE=$(echo "$ISSUE_TITLE" | sed 's/^[^:]*: *//')

          # Determine directory based on category
          if [ "$CATEGORY" = "til" ]; then
            DIR="blog/posts/til"
          else
            DIR="blog/posts"
          fi
          echo $DIR >> $GITHUB_STEP_SUMMARY
          echo $CATEGORY >> $GITHUB_STEP_SUMMARY

          # Generate a slugified version of the title for the filename
          SLUG=$(echo "$TITLE" | tr '[:upper:]' '[:lower:]' | tr -cs '[:alnum:]' '-' | sed 's/^-//;s/-$//')
          echo $SLUG >> $GITHUB_STEP_SUMMARY

          # Create the front matter with category, tags, and formatted date
          FRONT_MATTER="---\ntitle: \"$TITLE\"\ndate: ${CREATED_AT_PST}\ncategories: [${CATEGORY}]\ntags: [${TAGS}]\n---"

          # Prepare content for markdown file
          CONTENT="$FRONT_MATTER\n\n$ISSUE_BODY"

          # Save the content to a markdown file
          FILENAME="${DIR}/${SLUG}.md"
          echo $FILENAME >> $GITHUB_STEP_SUMMARY
          echo -e "$CONTENT" > "$FILENAME"

      - name: Commit and push changes
        env:
          ISSUE_TITLE: ${{ github.event.issue.title }}
          ISSUE_NUMBER: ${{ github.event.issue.number }}
          GH_TOKEN: ${{ github.token }}
        run: |
          git config --local user.name "github-actions[bot]"
          git config --local user.email "github-actions[bot]@users.noreply.github.com"
          git checkout -b add-post-$ISSUE_NUMBER
          git add .
          git commit -m "Add post for Issue: $ISSUE_TITLE"
          git push -u origin add-post-$ISSUE_NUMBER
          gh pr create --title "#$ISSUE_NUMBER - $ISSUE_TITLE" --body "Adding new post. Closes #$ISSUE_NUMBER"
```
you can configure your VPS / server to run specific sudo commands without being asked for your password. you just need to create a sudoers file with a NOPASSWD rule.
first, create the sudoers file:
sudo visudo -f /etc/sudoers.d/$USER
when i asked chatgpt about this, i found out you can also just run sudo visudo and it'll open the sudoers file.
now, let's say you have a user app that you want to be able to run apt update and apt upgrade without being asked for the sudo password. you need to add a line like this to your sudoers file:
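something along these lines (a sketch; confirm the apt path on your machine with which apt):

```
# let the "app" user run apt update and apt upgrade without a password prompt
app ALL=(ALL) NOPASSWD: /usr/bin/apt update, /usr/bin/apt upgrade
```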
you can open a specific channel/DM conversation in the slack app using the open website action on stream deck by pointing it to slack://channel?team=<Workspace ID>&id=<Channel / teammate ID>.
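for example (both IDs made up):

```
slack://channel?team=T0123ABCD&id=C0456EFGH
```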
you can create aliases in the GitHub CLI. i'm not super familiar with aliases. i've used them in the past to automate long commands.
currently i'm using a couple at work to shorten dbt commands ever so slightly (from dbt run --target prod --select <models> to prod-run <selection query>).
however, i had only seen these as aliases one sets up at the profile level/scope. as in, we'd go to ~/.bash_profile or ~/.zshrc and add a new alias that's set every time we open a new terminal.
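the profile-level version of that dbt shortcut would look roughly like this (a sketch):

```bash
# in ~/.zshrc or ~/.bash_profile; trailing arguments are appended,
# so `prod-run my_model` expands to `dbt run --target prod --select my_model`
alias prod-run='dbt run --target prod --select'
```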
this is the first time i've seen a cli offer that within the tool itself. i wonder if this is a common practice i've missed until now.
in the GitHub cli you can use the command alias set to set an alias (docs).
i usually have to google the full list of flags i'd like to use when creating a repo via the gh-cli, so i figured i'd save it as an alias now. this is what i ~~wish i remembered~~ would like to run most times:
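a sketch of the alias (the alias name, license, and gitignore template are placeholders):

```bash
# `gh new-repo my-project` then creates a public repo, adds a README,
# a license, and a .gitignore, and clones it locally
gh alias set new-repo 'repo create "$1" --public --add-readme --license mit --gitignore Python --clone'
```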
it simply creates a public repo with the given name, includes a README, a license, and a .gitignore file, and finally clones it to the local directory.
i might add the --disable-wiki flag simply because i don't use the wikis.
from the docs:
The expansion may specify additional arguments and flags. If the expansion includes positional placeholders such as "$1", extra arguments that follow the alias will be inserted appropriately. Otherwise, extra arguments will be appended to the expanded command.
I learned to chain a lot of small tools using GitHub Actions to produce ready-to-share images of code examples for social media (namely, instagram and twitter) from my phone. The steps, generally speaking, go as follows:
Create a new page on a Notion database. I'll probably create a specific template for this, like I do with TILs, but it's not necessary.
GitHub Action: Use my markdownify-notion python package to write the markdown version of this page and save it in a "quarto project" folder. This lets me use one general front-matter yaml file for all files rather than automating adding front matter to each file. I can still add specific front matter to individual files if I want to. (this TIL is an example of how this works - I'm writing it on Notion on my phone.)
GitHub Action: Use Quarto to render this markdown file --to html and save it in an "output" directory. This executes the code in the code cells and saves the output inline.
GitHub Action: Use shot-scraper to produce two files: a png screenshot and a pdf. I'm using shot-scraper for the PDF as well, rather than Quarto, because it's easier and I don't need to customize the pdf at all just yet. I'm creating and saving it essentially because I can, it's easy, and I might find a use for it later.
GitHub Action: Once there are new png or pdf files in the "output" directory, I use s3-credentials to put those objects in an S3 bucket I also created with s3-credentials. This tool is fantastic: s3-credentials.readthedocs.io. (a rough sketch of these commands is below.)
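roughly, the render/screenshot/upload steps look like this (file and bucket names are made up, the markdownify-notion step is omitted, and the "output" directory is assumed to come from the quarto project config):

```bash
# render the markdown file; code cells are executed and their output saved inline
quarto render quarto-project/my-til.md --to html

# png screenshot and pdf of the rendered page
shot-scraper shot output/my-til.html -o output/my-til.png
shot-scraper pdf output/my-til.html -o output/my-til.pdf

# push both artifacts to the S3 bucket
s3-credentials put-object my-code-images my-til.png output/my-til.png
s3-credentials put-object my-code-images my-til.pdf output/my-til.pdf
```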