XF 2.1 Link not unfurling

rolltidega

Member
What on earth is URL unfurling?

I'll just show you. It's easier!

(Attached screenshot: example of an unfurled link preview)

When a URL is inserted into content and that URL is on its own line, we will "unfurl" it into a richer preview that includes the page title, description, favicon, and other metadata. These rich previews give users more context about what the link contains. URL unfurling currently works anywhere that accepts BB code.

The functionality is enabled by default, but you can switch it off in the "Messages" section of the Admin CP if you want to.
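Under the hood, unfurling is just a metadata fetch: the software requests the page and reads whatever it exposes through Open Graph tags and friends. Here's a minimal sketch of that lookup, assuming Guzzle and standard og: tags; the URL and field names are illustrative, not XenForo's actual code.
Code:
<?php
use GuzzleHttp\Client;

// Fetch the page with a short time limit, the same way the unfurler would.
$html = (new Client(['timeout' => 6]))
    ->get('https://example.com/article')
    ->getBody()
    ->getContents();

$doc = new DOMDocument();
@$doc->loadHTML($html);          // real-world markup is rarely valid; suppress warnings
$xpath = new DOMXPath($doc);

// Return the content attribute of the first element matching an XPath query, if any.
$meta = function (string $query) use ($xpath): ?string {
    $node = $xpath->query($query)->item(0);
    return $node instanceof DOMElement ? $node->getAttribute('content') : null;
};

$preview = [
    'title'       => $meta('//meta[@property="og:title"]'),
    'description' => $meta('//meta[@property="og:description"]')
                     ?? $meta('//meta[@name="description"]'),
    'image'       => $meta('//meta[@property="og:image"]'),
    'favicon'     => '/favicon.ico', // fallback; a real fetcher also checks <link rel="icon">
];
print_r($preview);

Sites that return nothing useful here (no og: tags, no meta description) are the ones that never unfurl, no matter what the forum does.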
What if you have a URL that does not unfurl? There is one site whose URLs never unfurl when we post them.
 
If you go to Admin Control Panel > Tools, there's a tool you can use to test URLs with unfurling issues and help diagnose the problem. But the CBC (Canadian Broadcasting Corporation) is like that and has been forever. There's not much you can do if the problem is on the site's side; some sites simply don't serve their metadata in a way that supports unfurling.
 
This should probably be split into a new thread in the support forum, but there's an issue with some websites that tend to time out on HTTP/1.1 requests. TikTok's servers used to do that (they respond to HTTP/2 requests but hang indefinitely on HTTP/1.1), and it might be the case with Newsmax's server as well. If so, configuring Guzzle to use HTTP/2 would fix it, but HTTP/2 isn't fully supported within Guzzle, so it may cause side effects.
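For reference, forcing HTTP/2 on a single Guzzle request looks roughly like this. It needs cURL built with nghttp2 and isn't something XenForo exposes as a setting, so treat it as a sketch of the workaround rather than a supported fix; the URL is just an example of a host that hung on HTTP/1.1.
Code:
<?php
use GuzzleHttp\Client;

$client = new Client();
$response = $client->get('https://www.tiktok.com/', [
    'version' => 2.0,   // negotiate HTTP/2 instead of Guzzle's default HTTP/1.1
    'timeout' => 6,     // still cap the request so a hanging server fails fast
]);

echo $response->getProtocolVersion(); // "2" if the server actually negotiated HTTP/2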

Edit: just checked Newsmax's website; the reason unfurling fails (at least on my local install) is that they reject any request whose User-Agent header contains a URL.
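To illustrate (this is my assumption about what the default unfurl request sends, not confirmed XenForo behaviour): if the User-Agent contains the board URL, swapping in a UA without any URL in it gets a normal response from their server.
Code:
<?php
use GuzzleHttp\Client;

$client = new Client(['timeout' => 6]);

// Hypothetical URL-free User-Agent; the header value is illustrative.
$response = $client->get('https://www.newsmax.com/', [
    'headers' => ['User-Agent' => 'Mozilla/5.0 (compatible; MetadataFetcher/1.0)'],
]);

echo $response->getStatusCode(); // 200 once no URL appears in the User-Agent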
 
Hah, I am facing a similar issue with NPR links. The problem goes one step further: the link doesn't get posted at all. The testing tool gives the same error message: Could not fetch metadata from URL with error: cURL error 28: Operation timed out after 6000 milliseconds with 0 bytes received (see http://curl.haxx.se/libcurl/c/libcurl-errors.html).

I tried in the test section here and the link does get posted (no unfurl, though). I wonder what add-on or server setting might be blocking the post entirely. It does get posted if I submit it as a BB code link. Anyway 🤷‍♀️
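For what it's worth, cURL error 28 just means the fetch hit its overall time limit. This is the kind of Guzzle call that produces that exact message, with illustrative values and URL; raising the limit only helps if the remote site eventually answers.
Code:
<?php
use GuzzleHttp\Client;
use GuzzleHttp\Exception\ConnectException;

$client = new Client();
try {
    $client->get('https://www.npr.org/some-article', [
        'timeout'         => 6, // total time allowed for the request: 6000 ms
        'connect_timeout' => 3, // time allowed just to establish the connection
    ]);
} catch (ConnectException $e) {
    // "cURL error 28: Operation timed out after 6000 milliseconds with 0 bytes received"
    echo $e->getMessage();
}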
 
I started having issues over the past month with most CNN articles. However, it isn't a timeout issue but a "File too large" error...
Code:
The following error occurred while fetching metadata from URL https://www.cnn.com/2023/03/02/investing/premarket-stocks-trading

Could not fetch metadata from URL with error: File is too large.

Is there a way to increase the size allowance so that these CNN articles will unfurl, or am I misinterpreting the error?
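For context, a "File is too large" result usually means the fetcher streams the response and gives up once it has read more than some byte cap, so the page is too heavy to finish downloading before the metadata can be parsed. Here's a hypothetical sketch of that kind of guard, assuming Guzzle streaming and an illustrative 1 MB limit; this is not XenForo's actual check, and the cap here is not a real setting.
Code:
<?php
use GuzzleHttp\Client;

$maxBytes = 1_000_000; // illustrative cap, not a real XenForo option
$response = (new Client(['timeout' => 6]))->get(
    'https://www.cnn.com/2023/03/02/investing/premarket-stocks-trading',
    ['stream' => true]   // read the body incrementally instead of buffering it all
);

$body = $response->getBody();
$read = 0;
$html = '';
while (!$body->eof()) {
    $chunk = $body->read(8192);
    $read += strlen($chunk);
    if ($read > $maxBytes) {
        throw new RuntimeException('File is too large.');
    }
    $html .= $chunk;
}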
 