That works well, but when it comes to password-protected pages I run into difficulties.
I looked around a bit and found a certain syntax that should work. Here's an example:
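The syntax I found is along these lines — a sketch using wget's documented `--user` and `--ask-password` options (the URL is a placeholder for the real topic link):

```shell
# --ask-password prompts interactively, so the password never lands in the shell history
wget --user='Rosika' --ask-password 'https://itsfoss.community/t/[...]'
```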
I was asked for my password and entered it.
Yet I got this response:
Password for user ‘Rosika’:
--2020-08-11 18:42:20-- https://itsfoss.community/t/[...]
Resolving itsfoss.community (itsfoss.community)... 178.128.172.26
Connecting to itsfoss.community (itsfoss.community)|178.128.172.26|:443... connected.
HTTP request sent, awaiting response... 404 Not Found
2020-08-11 18:42:25 ERROR 404: Not Found.
Theoretically this command should work, but it doesn't.
I tried the same with a page from my personal messages. I tried my login e-mail as well as my username, and even http instead of https, but always with the same result: 404 Not Found.
Normally that works very well, but I can't use this command if the page is password-protected.
The resulting .txt file contains, among other info:
“Oops! That page doesn’t exist or is private.”
I have got the credentials (username and password), so that should be no problem.
But I'm at a loss as to how to include them in the command; I'm not sure about the syntax.
The problem is that you need a more specific request. Here is what works for me, but it won't work for you, because you need to capture your own HTTP request for this to work:
That's the URL. But technically you are making a GET request, and a GET request can carry, and may even require, more information than just the destination URL.
Press F12 while you are on the private message. Then go to the Network tab and look for the request initiated by BrowserTabChild*. Right-click the request and choose Copy as cURL (POSIX). Then paste and execute the acquired command in your terminal.
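For orientation, the command Firefox copies typically has this shape; every header value below is a placeholder, and your own copied command will contain your real ones:

```shell
curl 'https://itsfoss.community/t/[...]' \
  -H 'User-Agent: <your browser string>' \
  -H 'Accept: text/html' \
  -H 'Cookie: <your session cookies>'
```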
I've been trying to reproduce your instructions but wasn't really successful (neither with Chromium nor with Firefox).
Never mind.
I think this, even if I could manage it, wouldn't really serve my purpose, as I would first have to log in via another browser (like Chromium) in order to obtain the command parameters with "Copy as cURL".
In that case I could just copy the contents of the page manually.
Perhaps I'll still get something from the lynx people…
Yet it keeps bugging me that I cannot get your curl command working the way it did for you.
Despite what I said previously, I'd like to be able to use that as well.
So let me ask a bit further, if you don't mind.
What browser did you use to procure that long option string for curl?
Why wouldn't your exact command work for me?
I.e.: why would I have to use an HTTP request of my own?
Because you need your own session_id to get permission to view the page. Every user connected here has their own id and is permitted to view certain parts of this forum, depending on the permissions associated with their account and therefore with their session_id.
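Concretely, the session id travels in the request's Cookie header. A stripped-down request carrying only that header might look like this (assuming the forum runs Discourse, whose auth-token cookie is called _t; the value is a placeholder):

```shell
# only the cookie is kept; without it the server answers 404 for private pages
curl 'https://itsfoss.community/t/[...]' -H 'Cookie: _t=<your-session-token>'
```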
First of all: thank you very much for that wonderful explanatory GIF you created. What an excellent idea.
Finally I managed to follow your instructions. Yet I needed Firefox to do this; I couldn't reproduce it with Chromium.
O.K. Thanks. I really didn't know that.
So now that I managed to download the page as per your instructions, I put a redirect at the end of the command ("> output.html") to save it as an HTML file.
Yet opening that file in a browser simply displayed an empty page!
So I tried a redirect to text (“> output.txt”).
Here the contents of the page are displayed, but the output is pretty messed up, as everything the HTML file consists of is shown.
But at least I got the command working, which is a success indeed.
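Maybe stripping the tags will get me readable text. A crude attempt with sed, which simply deletes everything between < and > and is no substitute for a real HTML parser (demonstrated on a small sample page):

```shell
# create a small sample page to demonstrate on
printf '<html><body><p>Oops! That page does not exist.</p></body></html>\n' > sample.html
# crude tag stripping: remove every <...> sequence
sed 's/<[^>]*>//g' sample.html
```

Presumably lynx -dump output.html would do a much better job of this, since it actually renders the page.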
It means you weren't authenticated properly to view the message. Make sure your session_id is valid by getting the newest one in the way shown in the GIF.
O.K. I did that.
Applying [command] -O resulted in an HTML file named "5209". It amounts to 86.3 kB of data according to my file manager.
But opening it in the browser presents an empty page once again.
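Two things might help narrow this down: -O names the file after the last path segment of the URL (hence "5209"), while -o lets you pick a name yourself; and file/head show what the download actually contains, since an "HTML file" that renders blank is sometimes really something else, such as JSON. The <copied-request> below stands for your full Copy-as-cURL command:

```shell
curl <copied-request> -o output.html   # -o chooses the local name; -O reuses the remote one
file output.html                       # detected file type
head -c 200 output.html                # first 200 bytes of the content
```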