Hey, a question. Today I noticed that the css() method generates links with the hostname and protocol attached.
This is a PITA for us because occasionally we want to proxy content (for example using ngrok for demos).
So my question is: why is Kirby forcing an absolute URL here? In general, root-relative links are preferred, are they not?
Any chance of making this a feature request? (The ability to prefer root-relative links?)
P.S. I’m aware I can override the functionality and am already doing it with the kirby-sri module, but I’m curious why this decision was made.
I asked the same question some months ago, and the upshot is that relative links are bad for SEO. I asked from the perspective of making the HTML smaller, since the full address is taking up extra bytes. I’ve not run into issues with proxies, though, I’ve used NGROK and Browsersync proxies just fine.
I read that article, and to say its arguments are underwhelming is an understatement.
If parts of your staging site are publicly accessible, it’s very important to avoid using relative URLs for your site navigation links. If a single incorrect link gets deployed to production, then it may open up your entire staging site to being crawled, indexed and displayed in search results. Having multiple copies of your website floating around in the search results can pose a serious security threat on top of duplicate content issues.
This doesn’t make any sense. This is ONLY a risk if you use absolute links. If you use relative links, your staging site is never exposed when the ‘wrong’ link makes it into production, since the hostname isn’t in the link.
@jimbobrjames, we’re trying to load our site from a local dev environment via ngrok on a mobile device. Since the mobile device only has access to the content through the ngrok tunnel, it can’t load the CSS which is being served from the local device name and not the ngrok tunnel name.
This is another argument I’ve found:
If you have all of your internal links as relative URLs, it would be very, very, very easy for a scraper to simply scrape your whole website and put it up on a new domain, and the whole website would just work. That sucks for you, and it’s great for that scraper. But unless you are out there doing public services for scrapers, for some reason, that’s probably not something that you want happening with your beautiful, hardworking, handcrafted website. That’s one reason. There is a scraper risk.
Soo… Somebody spends the time to scrape your entire site but then can’t be bothered to just replace the URLs?
I’m not sure this is much of an argument.
I just posted what I’ve read in the past. You can create a feature request on GitHub or just not use the functions provided by Kirby.
Does this just affect the CSS/JS methods or also images?
Wondering whether this could be solved with a config setting that sets the URL to your ngrok URL when using ngrok via a multi-environment setup. I haven’t tested this.
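As a sketch of that idea (assuming Kirby’s multi-environment setup, where a host-specific config file is loaded on top of the regular config.php, and using a hypothetical ngrok hostname):

```php
<?php
// site/config/config.abc123.ngrok.io.php
// Hypothetical hostname — replace with your actual ngrok subdomain.
// Kirby only loads this file when the site is served from this host,
// so the base URL is overridden only inside the ngrok tunnel.

return [
    'url' => 'https://abc123.ngrok.io',
];
```

With that in place, the regular config.php stays untouched for local and production use.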
You can configure Kirby to use domain-relative URLs like so:
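A minimal sketch of that config (in site/config/config.php; setting the url option to a slash makes Kirby emit root-relative URLs):

```php
<?php
// site/config/config.php
// With 'url' set to '/', helpers like css() and js() generate
// root-relative links (e.g. /assets/css/main.css) instead of
// absolute ones with scheme and hostname attached.

return [
    'url' => '/',
];
```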
Doh. Of course that’s the right solution.