Extracting and displaying data from urlload result

Sorry, novice here.

I've figured out how to use urlload to display the page contents. However, what I need to do is check a series of links and only display the one that has a matching string. Is this possible?

Hi, and welcome to the forum! :slight_smile:

The following example will load 3 pages and print the URLs of the pages that contain the word "Example".

(Note: this forum preview doesn't support the urlload command, so the preview below will show errors, but it should work in the dashboard.)

Load the pages:
{repeat: for url in ["path1", "path2", "path3"]; locals=pages}
{urlload: http://example.com/{=url}; done=res -> ["html"=res]}
{endrepeat}

Find the pages that contain "Example":
{=map(filter(pages, x -> contains(x.html, "Example")), x -> x.url)}

This is amazing! Thank you so much! I've attempted to use the code from your urlload weather example to adjust the ending of the URLs. I've also tried using the clipboard entry to do so. Is there any adjustment to the clipboard command to get that to insert? Or is this not possible within the repeat call?

I'm sorry, I'll clarify. For each URL, I'd like to append the clipboard contents to the end.

I was able to add the clipboard call right after the {=url} in the urlload. It works perfectly! I really appreciate your help.
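[For anyone following along, here is a sketch of what that change might look like based on the description above — the {clipboard} placement and the paths are placeholders, adjust them for your actual URLs:]

{repeat: for url in ["path1", "path2", "path3"]; locals=pages}
{urlload: http://example.com/{=url}{clipboard}; done=res -> ["html"=res]}
{endrepeat}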

One last question though. The result of the filter produces one URL, which is the goal. However, I'm struggling with how to convert that result into a clickable link.

So if you are using the code like above, the end product will be a list of strings (URLs). If we know that list will only contain a single URL, we can convert it to a clickable link like so:

Configure some test data like you'll end up with above:
{urls=["https://example.com/a"]}

The link: {link: {=urls[1]}}{=urls[1]}{endlink}
The same link formatted differently: {link: {=urls[1]}}A link!{endlink}

More information about the link command is available here:

Hey Scott!

Tagging in for a second - Peter is out for the day and I'm trying to catch up with this!

The issue we're running into is that we modified the above snippet code to call the URLs based off the clipboard being the /(missing piece)

So it finds the URL that the "token" belongs to and outputs it in the result as:

["https://example.example.com/exmple2/customers/"]

We need to take that token from the clipboard and add it after the / at the end of the URL it loads.

When we try to add it into {=map(filter(pages, x -> contains(x.html, "Status:")), x -> x.url)} at the end of the code above, it does not add it within, and if we add it after, it creates a result that lists the link in brackets and then just writes the clipboard after it.

Is there a way to like "merge" these two functions so it can add the clipboard content to the end - then we can turn it into a clickable link?

Sorry Steven, I don't 100% understand, but let me know if this is what you are looking for.

Get the token from the clipboard (could do some processing at this point if you needed to select a specific part of the clipboard):
{token={clipboard}}
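[As an aside, if only part of the clipboard were needed, a hypothetical processing step could look like the following — the regex here is just an assumption, adjust it to match your token format:]

{raw={clipboard}}
{token=extractregex(raw, "(\w+)")}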

Configure some test data like what you'll end up with after the loading and filtering operation above:
{urls=["https://example.com/a"]}

The link: {link: {=urls[1] & token}}{=urls[1] & token}{endlink}

The same link formatted differently: {link: {=urls[1] & token}}A link!{endlink}

Hey!

I actually figured it out later last night! And I got it to work the way we need.

The only other thing we could possibly need: is there a way, once it outputs the correct link (which it is doing now), that we can then "pull" from the URL or from the page itself what # page it is on?

Like, there are 1-9 "shards"; when we click the URL, it opens and tells you which one you're on, and the # is also in the URL link.

We would hope that once it determines, say, that it's on shard 3, then later in the snippet it can put 3 as the shard number in the response it's creating.

Currently it's a form menu and you select 1-9, but if we could automate that, that would be ideal.

TL;DR: the query is working and we can then click the URL to open the page - but is it possible to scan that page for the shard # and also put it into the snippet results, after the correct URL is determined?

You've already loaded all the potential URLs, right? So you should be able to pull the shard directly from the matching page you loaded in your initial {urlload} calls.

So basically you will get some modified version of the first example above. Something like this:

Load the pages:

{repeat: for url in ["path1", "path2", "path3"]; locals=pages}
{urlload: https://example.com/{=url}; done=res -> ["html"=res]}
{endrepeat}

Find the pages that contain "Example" (using "find()" instead of "filter()" as we should only have one match):
{page=find(pages, x -> contains(x.html, "Example"))}

Get url:
{=page.url}

Get shard (this example gets the contents of the first H1 tag -- change this regex to whatever is suitable for extracting the shard in your case):
{=extractregex(page.html, "<h1>(.*)</h1>")}
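[Since the shard # also appears in the URL itself, a hypothetical alternative would be to extract it from the URL instead of the HTML — the "shard" pattern below is only a guess at the URL format, so substitute whatever actually precedes the number in your links:]

{=extractregex(page.url, "shard(\d+)")}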

Does this help?