There are definitely things websites could do to be less resource-hungry,
which would in turn mitigate the impact of Linux desktop design weaknesses.
(That is not an excuse for Linux, though, since Linux distros are often
promoted as a way to extend the life of aging hardware.) The problem is that
we don't have much leverage over websites. Blocking all scripts outright
tends to break the sites themselves, and alternative sites aren't always an
option.

I myself prefer web 1.0 over web 2.0, with its heavy scripts, wasted screen
real estate, endless-scrolling pages, poor-contrast UIs, and other lousy
design 'features'. There was a time when a Pentium 166 MHz with 64 MB of RAM
was enough for web browsing.
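
On Iznogoud's question below about capping resources per process: here is
one rough sketch, assuming Python 3 on an ordinary Linux box. The "firefox"
command and the limit values are just placeholders; it applies setrlimit
caps on CPU time and address space in a child process before exec'ing the
browser.

import os
import resource
import sys

def run_capped(cmd, cpu_seconds=600, mem_bytes=2 * 1024**3):
    """Fork, cap CPU time and address space in the child, then exec cmd."""
    pid = os.fork()
    if pid == 0:
        # Child: set hard per-process limits, then replace ourselves
        # with the requested command.
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))
        os.execvp(cmd[0], cmd)
    # Parent: wait for the child and pass along its exit code.
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status) if os.WIFEXITED(status) else 1

if __name__ == "__main__":
    sys.exit(run_capped(sys.argv[1:] or ["firefox"]))

For proper "containerizing" with cgroups, something like
systemd-run --user --scope -p MemoryMax=2G -p CPUQuota=50% firefox
should do the same job more robustly on a systemd-based distro, though I
haven't tested it here.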


On Sun, Jul 26, 2020, 19:37 Rick Engebretson <eng at pinenet.com> wrote:

> Both you and Doug Reed suggest the problem is mostly on the browser
> side. My experience suggests it is on the web server side.
>
> The status bar in Firefox shows connections to dozens of other web
> servers just to load what seems like a simple web page. My SeaMonkey
> browser still has the old-fashioned blinking "stop" button on the
> toolbar, and downloading a web page these days feels like one long
> busy connection. I even get grouched at by some sites for using an ad
> blocker; they say, "How do you think we make our money? Turn off your
> ad blocker," and even provide a button on their grouch box to turn it
> off.
>
> Most financial transaction sites advise closing your browser after
> logging out. Not a bad idea. Like washing your hands.
>
>
>
> Iznogoud wrote:
> > Very interesting points. Browsing web pages needs gigabytes of RAM
> > today. I do not know how software development has become so
> > irresponsible. I do not remember who it was who intentionally gave
> > slow computers to their programmers to make sure they wrote
> > efficient code. No such thinking today. I think it is a
> > market-driven problem: components are cheap, RAM is cheap, and all
> > the trouble stems from that. I will stop ranting about this now.
> >
> > I think there are some bad design choices on the software side,
> > like relying on other components that bring their own latency,
> > memory needs, and problems into any one large software framework
> > (say, Open/LibreOffice). I can think of the dreaded dbus. Also, try
> > running two separate Firefoxes at once under the same UID.
> >
> > But there are ideas. Controlling resources is a thing, and I think
> > that "containerizing" execution may help here. Appropriate
> > resources can be allocated per process, with caps on CPU time,
> > I/O, etc. I do not know how to do this off the top of my head, but
> > if there is an OS that should do it well for you, Linux is its
> > name. Does anyone have a solution of this kind to offer, so I do
> > not have to do endless browsing for it? Very interested.
> >
> > It is hard to force open-source developers to do you the favour of
> > making their software lean and robust beyond what their testing
> > suite covers. The response to this is: "here is the code, fix what
> > you do not like."
> >
> _______________________________________________
> TCLUG Mailing List - Minneapolis/St. Paul, Minnesota
> tclug-list at mn-linux.org
> http://mailman.mn-linux.org/mailman/listinfo/tclug-list
>