VMSZealot

Old Mac, Modern Internet


Clearly. You're missing it by miles, 100s of miles at least, and being rather snide about it too.

Sorry, getting snide is a reflex that happens when someone is being (intentionally?) dense even after repeated attempts to enlighten them.

 

comprised mostly of text and images which, at a basic level, these machines are capable of rendering. Drawing text and pictures has not become any harder. Even so-called "interactive" pages are hardly a new thing; even if older browsers don't support the newer mechanisms/approaches, they did do those sorts of things back then, afaik.

 

No. I got interrupted in the midst of typing a reply to this, and in the meantime plenty of other smart people have replied, so I'll settle for: no, you're wrong. Displaying this *rap has gotten way, way harder since the days when the only text formatting tags you could reliably expect to work were <center>, <b>, <i>, and <br>. It's frankly just not a sure thing anymore that the text-only content you might want to access exists as plain text bounded in <p> tags; even a plain-text browser needs a minimum understanding of HTML 4+ conventions to fish that text out of the <div> and CSS sea it lives inside.

 

(And, as had been noted, many websites today simply don't load *any* content unless a Javascript engine goes and gets it. Note that websites like this are controversial for a number of reasons, a big one being accessibility to the visually impaired. In some cases those users are simply SOL, in others they may be able to use an alternative "mobile" site that degrades more gracefully, but still realistically might require more CPU horsepower to sort through than a Motorola 68000 can offer.)

 

 

This is not at all like expecting Halo to run on an Atari 2600; it's far more akin to being surprised that an "enhanced" version of an old game won't run on the original hardware specs AT ALL even though all it does is bump the asset resolution/detail up a little bit.

 

You seem to be *really stuck* on the idea that the "resolution" of the modern Internet has been bumped up just a "little bit" from 1990. Considering how many times this has been explained to you I'm kinda thinking it's not going to be possible to get you to grasp the scale of this problem.

In any case, you are an asshole with your head stuck up that asshole. Just accept that what you see is not in fact reality at all, but only your viewpoint.

 

Aaand, I think you're done here.

 

EDIT/MOD NOTE:

 

I'm not locking this thread, yet, because the problem seems to be more personal than topical, but please turn down the heat. Babyish name-calling is uncalled for.

Edited by Gorgonops


In any case, you are an asshole with your head stuck up that asshole. Just accept that what you see is not in fact reality at all, but only your viewpoint.

 

Please refrain from using this kind of uncivil language, lest any more warnings need to be issued.

 

 

I'm not debating that there has been change, 

 

You do appear, in fact, to be debating that there has been change.

 

 

but as far as I am concerned whether it's really a "platform" at all is debatable. 

 

Funny, every web developer ever thinks of it as a platform.

 

 

HTTP is still HTTP. There's still quite a bit of reliance on the basic model of requesting a page and then doing further requests if the page says it needs something else.

 

HTTP is HTTP, except when it's HTTPS, but as anthon mentions in a post they were probably writing at the same time you wrote this, what an HTTPD returns when you make a request is no longer content. It is an application, which itself then uses HTTP to request content from a database via a piece of middleware.

 

As I said somewhere, the fundamental model of how web pages function has changed.

 

 

Uh huh, sure. The size of the table wouldn't overwhelm a processor; it would choke up the RAM/hard disk. We aren't talking about the 80s though, we are talking about the 90s.

 

Does it really matter which part of the overall computer is overwhelmed? The technicality of which Mac IIs were introduced in which decade doesn't really change the fact that an '030 or an '040 would have trouble parsing a large stream of items. What TheWhiteFalcon described isn't just your computer receiving live data about, say, eight items. I'm presuming each client computer was receiving, parsing, then displaying a feed of information about hundreds of items on the page (even if not all were visible) at once.

 

Regardless, Amazon was able to quickly build this page that rendered correctly on any modern Windows, Mac, Linux, UNIX computer, and that would not have been possible in the '90s, with any technology, because web browsers couldn't do this stuff then, and because you simply couldn't build that application quickly enough. Any attempt would ultimately look like web browsers do today.

 

 

That is both disgusting and lazy. I don't see why I (or anyone else) should have to run a browser on top of my OS just to run some software. I'd call that unnecessary introduction of intermediate layers and it's probably less flexible than the JVM. Also, I think it's objectively terrible regardless of what you think.

 

I also agree, but it's how these applications were developed, and some of them (Teams, Slack) are not "optional." (Although: you are generally given a work computer good enough to run them well.)

 

 

Err, no. I am pointing out that the basic elements are highly simplistic.

 

 

These things were simplistic in 1995. They are not today.


 

Yeah, I remembered literally the minute after posting that that existed and regretted not using some other video game as an example (Just what are the cool kids playing these days?) because I knew someone would drag it up.

 

Although, actually, maybe discussing Halo 2600 could be sort of instructive. The author wrote it after being inspired by a book about the *extreme* level of creativity game programmers employed to work around the ridiculous-by-today's-standards limitations of the 2600's hardware, and wanted to see first-hand what that was like. The result is, of course, a completely made-from-scratch creation that shares nothing but some thematic elements with the "real" Halo games. It therefore has nearly zero similarity to the challenge you're facing when trying to write a browser for an ancient computer that can reasonably interpret the modern dynamic web. That is a challenge more akin to writing a Halo 2600 that fits, into the 2600's hardware, a game engine capable of interpreting all the 3D models, sounds, and other assets the modern game engine uses, degrading only the presentation enough to fit the Atari's pixel-generation and sound-chip limitations. (Presumably for this scenario you've either interfaced a hard disk to your 2600 or come up with a bank-switching technique for accessing several gigabytes worth of data through the 4k address window available on the Atari's cartridge slot.) Good luck with that! ;)


I think that it helps if you have a specific use case. My specific use case isn't merely the bloody-mindedness of using a 20-30 year old computer today.

 

I actually use my old Mac as part of the coding process for my new software - the reasons are twofold. Firstly, I'm too cheap to buy a new copy of Fontographer, and emulation doesn't work brilliantly for 'real' work with this software. Secondly, I find the old OS very human-scale and easy to get on with.

 

I don't have a need for internet banking or Facebook or, or, or. But if I'm on my old machine, I don't want to swap to the new one just for a spot of email or to look something up on Wikipedia, Stackoverflow or similar (including posting/reading here or downloading from Macintosh Garden). If I can get this basic functionality to work then it makes my day easier. And if it helps someone else then all the better.

 

Finally, I don't like the solutions which are out there already. They're unnecessarily kludgy or hard to use. Cutting my own solution also provides a welcome excuse to code, and I'd code my own breakfast if I could!

 

Above all though, I don't want to get all religious and warry about it. If my idea works for you then great. If it doesn't then it doesn't. There's no need to get fighty about it.


We now have four threads that cover more or less the same ground. Two are discussing what could be made, one is almost finished with a product (Mozilla for PPC), and one is offering something: legacyweb.net. These are the threads:

 

Internet on older macs

https://68kmla.org/forums/index.php?/topic/30760-internet-on-older-macs/

Old Mac, Modern Internet

https://68kmla.org/forums/index.php?/topic/30767-old-mac-modern-internet/

MLA Forum Access from Older Macs

https://68kmla.org/forums/index.php?/topic/30806-mla-forum-access-from-older-macs/

Building First Mozilla for PPC (and maybe eventually 68k)

https://68kmla.org/forums/index.php?/topic/30826-building-first-mozilla-for-ppc-and-maybe-eventually-68k/

 

I posted that I would like to test legacyweb.net and I will start a new topic for it. Maybe more people would like to test and respond.

https://68kmla.org/forums/index.php?/topic/30827-testing-the-legacywebnet-project-use-modern-web-on-older-softwarehardware/


In fairness, I'm doing more than discussing. I'm writing too (and, within the next few weeks, I'll open source what I've done so that anyone who's hacked off with current solutions can help with something better).

 

This is more than idle chat. This is happening.

 

LegacyWeb seems like a nice idea - but it either doesn't work (from what I've read) or is tricky to configure. It also relies on stunnel, which isn't the easiest software to use.

 

Mozilla for PPC already exists, of course, in the form of Classilla - but it's slow, and PPC only. I aim to fix this by using a helper machine (RPi) to get a fast 68k solution working. Others may baulk at the idea of a solution which isn't running entirely on the Mac - but I think I have to accept that times (and security standards) have moved on and our old machines need a little help.

Edited by VMSZealot


Here's my cheat sheet for the efforts being discussed. Please correct any errors.

 

Denodster is trying to build Mozilla for PPC, then will attempt to make a 68K version of that. 

 

Agent_js03 is developing something called LegacyWeb. It uses a Raspberry Pi as a proxy (if that's the right term). Secure email is accomplished through stunnel. Some type of unspecified proxy software filters and simplifies web pages on the fly.

 

VMS Zealot is developing a very similar project with no name (Old Mac, Modern Internet thread). It also uses a Raspberry Pi as a proxy, but the approach is slightly different. For this project, the RPi *is* the server instead of acting as a man in the middle. 

 

And then mactjaap has MacIPgw, which has existed for some time. It can run on an Orange Pi, and it contains proxy components with a similar purpose to the others: stunnel for email, an HTTP 1.1-to-1.0 proxy, and a web rendering proxy.

 

The efforts from Agent_js03 and VMS Zealot seem very similar to me. Maybe they can combine their efforts?


As far as applications in the browser, I dunno I kinda got used to that. I use Google Sheets faithfully now because of the editing and live updates. Works great for schedules and progress at work. I keep a bunch of stuff on sheets for this. And I keep backup copies just in case something happens. 

 

And Google Docs/Forms is good for siphoning information at trade shows.

Edited by techknight


With all the talk that's going on lately about old Macs and the internet, this seems like an appropriate place to share what I think would be really cool. I wish that I could use, for instance, the old AOL software on one of my 68k Macs but have current content delivered to the application. It would be a single interface to access email and get information from the web, and we already know the client software runs on our machines.

 

So basically it would mean setting up a server that allows connections from an old Mac running the AOL client and delivers current content. And the whole point is that it would be a stripped-down tool that keeps you from having to deal with the complexities of the internet.


@VMSZealot

 

Maybe my lack of good English synonyms made your eyebrows frown! When I say "discuss" I mean there is no actual prototype or piece of usable software. As for legacyweb.net, there is. We can test it and see what we think.

 

If your solution is ready to test, I volunteer as well!

 

When developing MacIPgw I also needed testers, and they didn't come in droves. So I was very happy Mattis of the Datormuseum (computer museum) in Sweden did test it.

http://www.datormuseum.se/computers/apple/macintosh-plus


I wish that I could use, for instance, the old AOL software on one of my 68k Macs but have current content delivered to the application.

 

Now this is something I can get behind. Sort of how all the different iPhone apps deliver content in their own ways.


I wish that I could use, for instance, the old AOL software on one of my 68k Macs but have current content delivered to the application. It would be a single interface to access email and get information from the web, and we already know the client software runs on our machines.

 

 

I think we've exceeded the maximum number of near-identical efforts that can be underway at 68kmla at the same time. :)

 

Competing case mod efforts: 4

Competing old mac, modern internet efforts: 5

 

segmentation fault, core dumped


As I say, within the month I aim to get a universal installer out (when I say universal installer, I mean an installer that you can download and use to install my software on any Linux/Unix machine, be it PC, RPi, old PPC Mac…) and open source the code on GitHub. Then you can try it, improve it, make suggestions or roll it into your own solutions.

 

The first version will handle email only, tested specifically with iCloud and Gmail. The next version will extend to being a web proxy. Then I'd like to add syncing for contacts and calendar - and we'll see where we go from there.

 

Actually, just for the fun of it: if anyone knows the protocol specifications for eWorld, perhaps we could resurrect that too!


No information on the eWorld protocols?

 

Never mind.  Here's a progress update for you.  IMAPS functionality is now implemented and tested with Google.  I haven't tried it with iCloud yet.  I will, but first I want to get POP working.

 

Once POP is working, I'll get SMTP going - and then build an installer and put it on GitHub for you all to play with.


Wow. Great minds think alike and all that. Reading that post though, it looks like it might be a clever collection of existing software. Certainly, it mentions using stunnel - which isn't the easiest software to use.

 

The solution that I'm writing will be self-contained and, hopefully, a lot simpler to configure and set up. No tunnelling will happen at all - to the vintage computer, the Pi will look like the server. When the vintage machine makes a request of the Pi (requesting a page, requesting email, sending email), the Pi will process it and make a separate request of its own, using modern internet standards. When the Pi receives a response it will package it up appropriately and then return it to the requesting machine.

 

Sure, this approach will add a little latency - but not enough that you'd notice.
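To make the idea concrete, here's a toy Python sketch of the web half (illustrative only - this isn't my actual code, and a real version would need caching, error handling and page simplification on top):

```python
# Toy HTTP/1.0 gateway: the old Mac uses this box as its web proxy;
# the box fetches the real page over modern HTTPS and hands the
# bytes back over plain old HTTP.
from http.server import BaseHTTPRequestHandler, HTTPServer
import urllib.request

class GatewayHandler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.0"          # what vintage browsers speak

    def do_GET(self):
        # Proxy-style requests carry the full URL in self.path,
        # e.g. "http://www.apple.com/" - upgrade it to HTTPS upstream.
        target = self.path.replace("http://", "https://", 1)
        try:
            with urllib.request.urlopen(target, timeout=30) as upstream:
                body = upstream.read()
                ctype = upstream.headers.get("Content-Type", "text/html")
        except Exception as exc:
            self.send_error(502, f"upstream fetch failed: {exc}")
            return
        self.send_response(200)
        self.send_header("Content-Type", ctype)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), GatewayHandler).serve_forever()
```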

Hi,

I am the author of that project. If you look at the GitHub, part of the appeal is that it uses Ansible to automatically configure your stunnel for you. In fact, it configures pretty much everything for you. All you do is edit a configuration file, run it and bam, you are done.

 

I can see, however, that having everything configurable in the same place post-installation, such as through a UI, would be preferable. Maybe that should be your focus. There is no need to "reinvent the wheel," so to speak, by writing software to do what all these other components already do. Just write a web interface or something where you can configure your mail settings, etc., click save, and it will automatically update all the configuration files.

 

I guarantee you it is a lot easier to install packages and edit configuration files than it is to write your own version of all these things.


My thought in the past was using an external co-processor to do all the rendering - like a server as you suggest, or a Pi, or something along those lines.

 

But instead of coming back to the old browser as images, it needs to come back as full content, just in a language the older machines can understand. 

 

Almost like a translator of sorts. That would be one way. 

 

The other way would be a co-processor running the entire browser except for the UI, with its output/display buffer dumped to a custom "browser" written to run on those old machines. They'd work and act like today's browsers and be just as responsive, because all the heavy lifting is done on the copro. Your mouse movements, key presses, and clicks are all sent as datagrams/packets/etc. back to the browser engine running on the copro. The browser processes those as if it were running on the native machine, responds like it normally would (say, hovering over hamburger menus), re-renders what it needs to, and dumps it back to the display buffer.

 

Sure, the UI becomes a "terminal" at that point, but at least the internet browsing would be responsive, and the UI looks native instead of like an X Window remote connection.

I have often thought of this as a solution as well. It is challenging because HTML/JavaScript/etc. keep changing constantly, plus, to my understanding, there is always more than one way to do something. So, for example, translating the newer divs to tables and whatnot may not end up looking the way you want, not to mention that with JavaScript functionality increasing, you would have to somehow simulate the newer JavaScript functionality at the server level and then use the older JavaScript on the client. I can't think of any way of doing this other than manually.
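The mechanical half of that translation is simple enough to sketch - this illustration (mine, not actual LegacyWeb code) uses BeautifulSoup to strip the modern machinery and demote divs to plain flow content. It's the JavaScript half that has no mechanical answer:

```python
# Rough server-side simplifier: strip scripts/styles and demote
# layout containers so a mid-90s browser has a fighting chance.
from bs4 import BeautifulSoup

def simplify(html):
    soup = BeautifulSoup(html, "html.parser")
    # Modern machinery an old browser can't use anyway
    for tag in soup(["script", "style", "noscript", "svg", "iframe"]):
        tag.decompose()
    # CSS-positioned containers become plain paragraphs
    for tag in soup.find_all(["div", "section", "article", "nav"]):
        tag.name = "p"
    for tag in soup.find_all(True):
        tag.attrs = {}          # drop class/style/id attributes
    return str(soup)

print(simplify('<div class="x" style="float:left">hello</div>'))
# -> <p>hello</p>
```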


Hi,

I am the author of that project. If you look at the GitHub, part of the appeal is that it uses Ansible to automatically configure your stunnel for you. In fact, it configures pretty much everything for you. All you do is edit a configuration file, run it and bam, you are done.

 

I can see, however, that having everything configurable in the same place post-installation, such as through a UI, would be preferable. Maybe that should be your focus. There is no need to "reinvent the wheel," so to speak, by writing software to do what all these other components already do. Just write a web interface or something where you can configure your mail settings, etc., click save, and it will automatically update all the configuration files.

 

I guarantee you it is a lot easier to install packages and edit configuration files than it is to write your own version of all these things.

My tool is coming along more slowly than I'd hoped (the pressures of work and all that!) but it can now fetch email from multiple accounts and serve it up to a classic Mac - any classic Mac capable of running an email client. As long as it can connect to the server (using Ethernet or SLIP), my tool will work. I haven't put the code on GitHub yet because I haven't (yet) finished SMTP.

 

I confess that I couldn't work out how to make stunnel work, although that's probably just an example of me being dense. Example configuration files, with annotations describing what each setting does, for common email services like Gmail and iCloud, might be helpful.
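To show the sort of thing I mean, here is roughly what an annotated stunnel client config for Gmail might look like (a sketch, not a tested config - the local ports are arbitrary examples, and the hostnames are Google's published IMAPS/SMTPS endpoints):

```
; stunnel in client mode: accept plaintext locally, speak TLS upstream
client = yes

[gmail-imap]
; point the classic Mac's email client at this box, port 1143, for IMAP
accept = 0.0.0.0:1143
; stunnel wraps the connection in TLS and forwards it to Google
connect = imap.gmail.com:993

[gmail-smtp]
; plaintext SMTP from the old client arrives on port 1025
accept = 0.0.0.0:1025
; Gmail's TLS-wrapped SMTP endpoint
connect = smtp.gmail.com:465
```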

 

That said though, unless I am missing something, I suspect that my tool might have value for some users - not least because a client (the classic Mac) can connect using POP3 and receive its email from an IMAP server. I don't believe that this is what stunnel was designed to do.
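The core of that POP3-in-front, IMAP-behind trick is small enough to sketch in Python. This illustrates the idea only - it is not my actual implementation, fakes message sizes, and skips all error handling:

```python
# Toy POP3 front end backed by IMAPS: answers just enough of POP3
# (USER/PASS/STAT/RETR/QUIT) for a vintage client, fetching the real
# messages from a modern IMAP server. Hostname is an example.
import imaplib
import socketserver

IMAP_HOST = "imap.gmail.com"   # example upstream IMAPS server

class Pop3Bridge(socketserver.StreamRequestHandler):
    def reply(self, line):
        self.wfile.write((line + "\r\n").encode("ascii"))

    def handle(self):
        user = imap = msgs = None
        self.reply("+OK toy POP3/IMAP bridge ready")
        for raw in self.rfile:
            cmd, _, arg = raw.decode("ascii").strip().partition(" ")
            cmd = cmd.upper()
            if cmd == "USER":
                user = arg
                self.reply("+OK send PASS")
            elif cmd == "PASS":
                imap = imaplib.IMAP4_SSL(IMAP_HOST)
                imap.login(user, arg)
                imap.select("INBOX")
                _, data = imap.search(None, "ALL")
                msgs = data[0].split()            # IMAP message numbers
                self.reply("+OK logged in")
            elif cmd == "STAT":
                self.reply(f"+OK {len(msgs)} 0")  # sizes faked as zero
            elif cmd == "RETR":
                _, data = imap.fetch(msgs[int(arg) - 1], "(RFC822)")
                body = data[0][1].replace(b"\r\n.", b"\r\n..")  # byte-stuff
                self.reply("+OK message follows")
                self.wfile.write(body + b"\r\n.\r\n")
            elif cmd == "QUIT":
                self.reply("+OK bye")
                break
            else:
                self.reply("-ERR not implemented in this sketch")

# POP3's standard port 110 needs root; a high port works for testing.
socketserver.TCPServer(("0.0.0.0", 1110), Pop3Bridge).serve_forever()
```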

 

Perhaps, though, if you could make stunnel easier to configure (or at least provide detailed examples covering the stunnel settings for common email providers, as mentioned previously, and also the settings needed in common classic email software to work with stunnel - Eudora, Emailer, Outlook Express etc), someone could roll up an easy-to-set-up Linux server image to serve up internetty goodness to the retro computing community…


Okay - so progress has been made and screenshots will be available soon. Claris Emailer and Eudora are both able to retrieve email from Gmail successfully (without enabling POP on Gmail). SMTP isn't working yet, and neither is entirely happy with displaying some modern email - fortunately though, owing to the way my software works, I can fix this - the email can be interpreted and reformatted when it gets sent to the vintage computer. That's for another day though - I want to get SMTP working first.

 

I think I've had a bit of a cunning plan for getting the internet working too. My thought is that vintage computers don't actually need to display the modern internet. No one with a vintage computer is going to be doing Facebook, internet banking, ordering things from Amazon, or watching iPlayer. Browsing, certainly - but a computer from 1995 (or earlier) isn't really going to need an internet more advanced than the internet of the late 20th century.

 

That being the case, there's a ready source of internet pages good to go - no need for (much) reformatting. It's called the Wayback Machine - and, yes, it's a little slow - but so are our computers. Here is how I think it might work:

 

Raspberry Pi (or other computer) is the gateway.

Vintage computer makes a request for, say, www.apple.com

Gateway computer receives request and looks in its configuration to see what date page it should be requesting (for example, the gateway might be configured to request pages from 1998).

It will try to retrieve the page within that timespan but, if it can't get it, it will edge forward through time until it finds a page that matches.

Once it gets the page it removes the floating Wayback Machine bar and sends the page on to the requesting vintage computer.

 

The gateway computer would also have a simple search engine webpage on it to provide a launching point to Wayback Machine.
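As a sanity check that the pieces exist: the lookup step could lean on archive.org's public availability API, which returns the snapshot closest to a requested timestamp (so "edging forward" becomes a matter of nudging the timestamp). A rough Python sketch, not production code:

```python
# Sketch of the gateway's Wayback lookup. The "id_" flag in the
# snapshot URL is the Wayback convention for fetching the page
# as-captured, without the floating toolbar.
import json
import urllib.parse
import urllib.request

def wayback_fetch(url, timestamp="19980101"):
    """Return (snapshot_url, page_bytes) for the snapshot closest to
    the configured date, or (None, None) if nothing is archived."""
    query = urllib.parse.urlencode({"url": url, "timestamp": timestamp})
    with urllib.request.urlopen(
            "https://archive.org/wayback/available?" + query) as resp:
        info = json.load(resp)
    closest = info.get("archived_snapshots", {}).get("closest")
    if not closest or not closest.get("available"):
        return None, None
    # Insert "id_" after the timestamp so the Archive serves the raw
    # page rather than injecting its own toolbar markup.
    snap = closest["url"].replace(closest["timestamp"],
                                  closest["timestamp"] + "id_", 1)
    with urllib.request.urlopen(snap) as resp:
        return snap, resp.read()

snap, body = wayback_fetch("http://www.apple.com/")
print(snap, len(body) if body else 0)
```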

 

So, any thoughts anyone? Any objections to this method, or suggestions on how to make it work?


Probably the first problem I can think of is that the information, together with the format, is regressed to, e.g., 1998. We're not running our Macs in 1998; we're running them in 2017. What would that do to sites like 68kmla? My 6360 was current in 1998, and my G4 didn't even exist!


Here's my cheat sheet for the efforts being discussed. Please correct any errors.

 

Denodster is trying to build Mozilla for PPC, then will attempt to make a 68K version of that. 

 

Agent_js03 is developing something called LegacyWeb. It uses a Raspberry Pi as a proxy (if that's the right term). Secure email is accomplished through stunnel. Some type of unspecified proxy software filters and simplifies web pages on the fly.

 

VMS Zealot is developing a very similar project with no name (Old Mac, Modern Internet thread). It also uses a Raspberry Pi as a proxy, but the approach is slightly different. For this project, the RPi *is* the server instead of acting as a man in the middle. 

 

And then mactjaap has MacIPgw, which has existed for some time. It can run on an Orange Pi, and it contains proxy components with a similar purpose to the others: stunnel for email, an HTTP 1.1-to-1.0 proxy, and a web rendering proxy.

 

The efforts from Agent_js03 and VMS Zealot seem very similar to me. Maybe they can combine their efforts?

Clarification needed here...

I am not really using a "proxy" solution per se for web pages; I am developing individual pages that use APIs (such as Google Maps, weather, etc.) to serve that data to the older PC/Mac.
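To make that concrete: each such page is basically a tiny server-side script that calls a modern JSON API and emits dead-simple HTML. A sketch of the pattern (illustrative only, not my actual page code; Open-Meteo appears here just as an example of a keyless weather API):

```python
# Minimal "API page": fetch modern JSON, emit 1990s-friendly HTML.
import json
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

API = ("https://api.open-meteo.com/v1/forecast"
       "?latitude=52.52&longitude=13.41&current_weather=true")

class WeatherPage(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.0"

    def do_GET(self):
        with urllib.request.urlopen(API) as resp:
            now = json.load(resp)["current_weather"]
        html = ("<HTML><HEAD><TITLE>Weather</TITLE></HEAD><BODY>"
                "<H1>Current weather</H1>"
                f"<P>Temperature: {now['temperature']} C<BR>"
                f"Wind: {now['windspeed']} km/h</P>"
                "</BODY></HTML>")
        body = html.encode("ascii")
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("0.0.0.0", 8000), WeatherPage).serve_forever()
```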

 

Also as demonstrated by this thread:

 

https://68kmla.org/forums/index.php?/topic/30827-testing-the-legacywebnet-project-use-modern-web-on-older-softwarehardware/

 

the Ansible-ized setup is not fully working yet on the Raspberry Pi. I have modified it manually to work on my Raspberry Pi, but haven't checked in those changes yet or made them more "productionized." I plan to do this when I have the time. I am about to have two kids under 2 and also have some home-related projects going on, so time is a commodity, and I admit progress on this project is about to slow to a creep.


With all the talk that's going on lately about old Macs and the internet, this seems like an appropriate place to share what I think would be really cool. I wish that I could use, for instance, the old AOL software on one of my 68k Macs but have current content delivered to the application. It would be a single interface to access email and get information from the web, and we already know the client software runs on our machines.

 

So basically it would mean setting up a server that allows connections from an old Mac running the AOL client and delivers current content. And the whole point is that it would be a stripped-down tool that keeps you from having to deal with the complexities of the internet.

It's funny you mention it. I had actually started working a while back on a Python project to mock the OSCAR protocol used by AOL Instant Messenger, the goal being that you could write another component that links to, say, Facebook chat, and then delivers buddy lists, IMs, etc. to the AIM client as if they came from an AIM account. I had gotten as far as successfully mocking a login and fetching some statistics. I then gave up on it. It's actually not horribly onerous, but for one guy with very little time on his hands, it got to be too much.

 

When I get home I will look on my personal laptop and find out what git repo I was committing to, if anyone is interested.


Okay - so I still haven't entirely decided what it's called, and it's still only partially working, but here it is https://github.com/PascalHarris/NetGateway - have fun.

 

I've got rather a lot on at the moment, so I make no promises as to when I'll finish writing it, but it has tremendous scope for improvement (reformatting email into a more vintage-friendly format, for example - as you can see, there are escape codes in the UTF which it definitely doesn't like) - and it may be that others have good ideas for how to improve it. If you do want to improve it, be sure to make your commits with full and detailed comments - otherwise I'll reject them. Not out of spite, but just because I don't have time to work out what your change does and why.
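(For anyone who fancies having a go at that reformatting: the rough shape of the fix, sketched here in plain Python rather than lifted from NetGateway, is to decode any MIME encoded-words in the headers and then transcode the text to Mac Roman, which is what classic Mac email clients actually expect.)

```python
# Decode "=?UTF-8?...?=" header soup, then transcode to Mac Roman.
from email.header import decode_header, make_header

def clean_subject(raw_subject):
    """Turn MIME encoded-word headers into readable Unicode text."""
    return str(make_header(decode_header(raw_subject)))

def to_mac_roman(text):
    """Best-effort conversion of Unicode text to Mac Roman bytes."""
    return text.encode("mac_roman", errors="replace")

subject = clean_subject("=?utf-8?q?Caf=C3=A9_receipt?=")
print(subject)                 # Café receipt
print(to_mac_roman(subject))   # b'Caf\x8e receipt' (Mac Roman e-acute)
```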

 

Be sure to read the Preliminary Instructions file first - it'll tell you how to install and configure this tool.  If you have any questions feel free - but, in the meantime, behold the LCIII reading modern Gmail.  Neat, huh?

 

[Screenshots: PICTURE1.jpg and PICTURE2.jpg - the LC III reading modern Gmail]

Edited by VMSZealot

