Save 'ur URL


Save a selected URL to a text file (appending if the file exists)
 

Introduction
Sometimes when browsing you want to save a URL quickly for future reference without creating a bookmark in your browser. That is what this workflow does. For simplicity it uses a plain text file (which will be created for you if it doesn't already exist) to which saved URLs are appended. You select the folder for that file in the workflow configuration.
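
In essence, each save is just an append to that file. Purely by way of illustration (this is not the workflow's actual code, and the file name and line layout are assumptions), the behaviour amounts to something like this Python sketch:

import os

def save_url(links_file, url, description=""):
    # Append the description (if given) and the URL; the file is created on first use
    with open(links_file, "a", encoding="utf-8") as f:
        if description:
            f.write(description + "\n")
        f.write(url + "\n\n")

save_url("Links.txt", "https://www.alfredapp.com", "Alfred home page")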

Usage
Using your Universal Action hotkey on a selected URL, select Save URL to links file from the list and press ⏎.

[Screenshot: the Universal Actions list with "Save URL to links file" selected]

You will then be prompted for a description of the URL (which may be a useful reminder). Type the description and press ⏎. If you wish, you can leave the description blank by simply pressing ⏎.

[Screenshot: the description prompt]
The result will be a text file (which you can open in your default text editor; I'm using CotEditor here and added the "Saved links" heading manually when creating the file):

[Screenshot: the resulting Links file open in CotEditor]

 

Notes
1. In the workflow configuration you can choose the keyword you wish to use to open the Links.txt file.
2. If (quite understandably 😀) you loathe the sound effect you can, of course, mute it in the workflow configuration.

 

GitHub download link

 

Stephen


Version 2.4 represents a significant rewrite of the workflow so that:

  • The first time you run the workflow, the relevant plain text or markdown file is created with the heading Saved URLs.
  • It is now obligatory to include a description of the URL when saving to a markdown Links file, but that remains optional when saving to a plain text Links file.
  • The ReadMe has been updated and expanded.
  • The grammar in a couple of the dialog boxes has been improved.
  • I have added a warning when you choose to create a new Links file (potentially deleting any previously saved links).

Stephen


Version 3.0 is a significant update and adds the ability to search a Links.txt file and open any found URL directly from Alfred. Note that this ability does not currently extend to a Links file saved in markdown format.

Configuration options
You can choose:

  • The keyword you wish to use to trigger a search of the Links.txt file.
  • Whether you wish the selected URL to open in your default browser or (if you use Firefox) in a Firefox private window.

Searching URLs in a Links.txt file
Simply type your search keyword and the relevant URLs will display in Alfred's window. (Note that the search is case insensitive.) Press ⏎ to display the selected URL in your chosen browser. Here is an example of the result when I have chosen the configuration option to open a URL in a Firefox private window:

[Screenshot: search results displayed in Alfred]
You will be warned if:
- the Links.txt file does not exist; or
- your search term is not found.
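
For those interested in what happens behind the scenes, a Script Filter reads the Links.txt file and hands any matching URLs to Alfred as JSON items. The following is only a rough Python sketch of that idea (not the workflow's actual script filter); the file location and the assumption that each saved URL sits on its own line beginning with http are illustrative:

import json
import os
import sys

# The search term typed in Alfred arrives as the script's first argument
query = sys.argv[1].lower() if len(sys.argv) > 1 else ""
links_file = os.path.expanduser("~/Links.txt")  # illustrative location

items = []
if os.path.exists(links_file):
    with open(links_file, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            # Case-insensitive match against lines that look like URLs
            if line.startswith("http") and query in line.lower():
                items.append({"title": line, "arg": line})

if not items:
    # Returning a placeholder item avoids handing Alfred an empty result list
    items = [{"title": "No matching URLs found", "valid": False}]

print(json.dumps({"items": items}))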


Notes
1. I am indebted to @vitor for huge help with the script filter.
2. If anyone more skilled than I is interested in contributing amendments to the script filter to:

  • detect use of a markdown links file; and
  • grep for searched links within that file to extract the URLs

all help will be gratefully received and will be acknowledged!
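
By way of illustration of the second point, extracting URLs from a markdown Links file might look something like the following sketch (assuming links are stored in the standard [description](URL) form; the sample text is made up):

import re

# Illustrative content; in practice this would be read from the markdown Links file
markdown_text = "- [Alfred app](https://www.alfredapp.com)\n- [Alfred forum](https://www.alfredforum.com)"

# Capture the description and URL from each markdown-style link
for description, url in re.findall(r'\[([^\]]*)\]\((https?://[^)\s]+)\)', markdown_text):
    print(description, url)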

 

Stephen


Although version 3.2 is a point release, it contains one significant enhancement and a significant bug fix.

Version 3.1 (which was not released) extended to markdown Links files the ability to search for URLs and open a selected link (with thanks to @vitor for guiding me to a missing exit 😀).

This version also fixes an irritating and persistent bug which, when searching a Links file, on occasion led to Alfred showing default fallback searches as the search result.
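
(For what it's worth, a common cause of that behaviour is a script filter returning no items at all; returning a placeholder "nothing found" item instead avoids it.)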

Stephen


@Stephen_C In order to extract the title of a webpage as a prefill for the prompt, I added the following Python script as a workflow step. It would be great to add it to your workflow…

 

 

[Screenshot: the Python script added as a workflow step]

 

Python3 script:

 

import urllib.request
import re
import os


def fetch_webpage_title(url):
    try:
        # Fetch the webpage content
        response = urllib.request.urlopen(url)
        html = response.read().decode('utf-8', errors='ignore')

        # Use regex to find the title tag
        title_match = re.search('<title>(.*?)</title>', html, re.IGNORECASE)

        # Extract the title if found
        if title_match:
            return title_match.group(1).strip()
        else:
            return ""
    except Exception as e:
        return f"An error occurred: {e}"


url = os.getenv('theURL')
print(fetch_webpage_title(url))
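
For anyone trying the script on its own: theURL is a workflow variable which Alfred passes to the script as an environment variable, so it needs to be set by hand when testing outside Alfred.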



@Acidham thank you for that. I've been testing it for a while. It's potentially really useful. However, I'd prefer to have a rather more elegant failure fallback. 😁

 

By way of example, the following sites fail, leaving the error message as the description (and I'm not sure that's ideal: would it not be better simply to leave the field blank for the user to complete, and explain that in the configuration?):

https://www.macbartender.com/Bartender5/
https://support.captureone.com/hc/en-us/community/topics
https://webbtelescope.org/images
https://apod.nasa.gov/apod/astropix.html

 

This link simply produces a blank with no error in the description field (perhaps better?):

https://support.mozilla.org/en-US/kb/getting-started-thunderbird-main-window-supernova#w_2-unified-toolbar

 

This site produces a rather odd result:

http:// https://www.stclairsoft.com/blog/default-folder-x-6-new-features/

 

If possible, I'd prefer a more uniform approach in respect of uncooperative sites (however they may be defined) before introducing and releasing this (which is not to detract from the fact that it has great potential).

 

(I apologise for the fact that I've not used Python for very many years so am now rather too rusty to tackle any re-programming myself!)

 

Stephen


@Stephen_C Oops, sorry, I forgot to return an empty string instead of returning the error. And shame on me, I did not test it enough.

 

I changed the line return f"An error occurred: {e}" to return "".

 

import urllib.request
import re
import os


def fetch_webpage_title(url):
    try:
        # Fetch the webpage content
        response = urllib.request.urlopen(url)
        html = response.read().decode('utf-8', errors='ignore')

        # Use regex to find the title tag
        title_match = re.search('<title>(.*?)</title>', html, re.IGNORECASE)

        # Extract the title if found
        if title_match:
            return title_match.group(1).strip()
        else:
            return ""
    except Exception:
        # Fall back to an empty title rather than surfacing the error
        return ""


url = os.getenv('theURL')
print(fetch_webpage_title(url))


@Acidham thanks, that is better, but there are still problems with some sites. Is there any way of dealing more neatly with, for example, the URL for this Alfred page (i.e., the one you are on now), where there are certain punctuation marks in the URL?

 

Also, please try these two URLs and note that an extra line appears to be added to the description field:

https://e-life.co.uk/login

https://www.eyecarepartners.co.uk/

 

Stephen


To be quite clear, I'm adding this ability to the workflow as an option, but I'd just like to get the extraction as robust as reasonably possible before releasing the update. I much appreciate your assistance.

 

Stephen


Version 4.0 has been released, with credit to @Acidham for the major new feature:

 

If you check the relevant box in the user configuration, the workflow will attempt to extract the title of the web page for which you are saving the URL and put that title in the description field. If the workflow is unable for any reason to retrieve the title, the description field will simply be left blank for you to complete.

If you prefer to complete the description field yourself, simply leave the relevant check box unchecked in the user configuration.

 

I have also updated the ReadMe and ensured both it and the user configuration are now in a rather more logical order.

 

Stephen


@Stephen_C I just shared a fixed code snippet via direct message, but I am uncertain if the message was ever sent. Therefore, here it is again. The new code should fix most of the issues when reading the title of a webpage. Just replace the code in the workflow with the code below.

 

import urllib.request
import re
import os
import http.cookiejar


def fetch_webpage_title(url):
    try:
        # Use a cookie jar and a browser-like User-Agent so that more sites serve the page normally
        cj = http.cookiejar.CookieJar()
        opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(cj))

        headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.77 Safari/537.3'}
        request = urllib.request.Request(url, headers=headers)
        response = opener.open(request)

        html = response.read().decode('utf-8', errors='ignore')

        # Use regex to find the title tag
        title_match = re.search('<title>(.*?)</title>', html, re.IGNORECASE)

        # Extract the title if found
        if title_match:
            return title_match.group(1).strip()
        else:
            return ""
    except Exception:
        # Fall back to an empty title so the description field is simply left blank
        return ""


url = os.getenv('theURL')
print(fetch_webpage_title(url))


 


Version 4.5 requires Alfred 5.5, so if you are not using that please stay on your earlier version.

 

The new version allows preview of plain text and markdown links files using Alfred's new Text View. Editing of plain text links files also uses Alfred's Text View. Full instructions for use of the workflow are contained in the ReadMe.

 

Stephen
