[colug-432] ABI and hardware architecture and OS match-ups
Rick Troth
rmt at casita.net
Mon Sep 26 10:23:49 EDT 2016
On 09/24/2016 02:31 PM, Jeff Frontz wrote:
> I googled "freebsd netbsd abi" and this was the top hit: NetBSD Binary
> Emulation (http://www.netbsd.org/docs/compat.html ). According to that
> page (under http://www.netbsd.org/docs/compat.html#ports -- "Which OSs can
> I emulate on my machine?") on i386 NetBSD you can run:
>
> - BSDI (up to BSDI 3.x binaries)
> - FreeBSD(x86) (a.out and ELF binaries)
> - IBCS2 systems
> - Interactive Unix
> - SCO Unix
> - SCO Xenix
> - Linux(x86)
> - Solaris(x86)
Thanks! Fabulous!
So what I'll do is add logic such that if the 'setup' script is run on
NetBSD, it will accept a FreeBSD or Linux or Solaris build and hope for
the best. (But if it finds a NetBSD build, it will use that, of course.)
The point being that I can defer building packages on NetBSD, though
I'll test that at least one of the others is usable there.
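Roughly, the fallback might look like this (a sketch in POSIX sh; the
PKGROOT variable and the "OS-HW" directory naming are illustrative
assumptions, not the real 'setup' script):

    os=$(uname -s)      # e.g. NetBSD
    hw=$(uname -m)      # e.g. i386
    if [ -d "$PKGROOT/$os-$hw" ]; then
        plat="$os-$hw"                          # a native build wins
    else
        # fall back to binaries the NetBSD compat layer can run
        for alt in FreeBSD Linux SunOS; do      # SunOS == Solaris
            if [ -d "$PKGROOT/$alt-$hw" ]; then
                plat="$alt-$hw"                 # hope for the best
                break
            fi
        done
    fi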
Juicy articles deleted for brevity.
I probably should have known that CYGWIN-built binaries might not run in
a MinGW world, because I know all too well that CYGWIN linkage by
default requires cygwin1.dll to create the Unix-like environment. Pretty
sure some trick in how that DLL is loaded lets the cross-process "stuff"
persist, which would be harder with static linkage. I've never dug into
the details. /Maybe/ MinGW-built can run on CYGWIN (?), but if one of
the things lost is sym-links then I'm kind of outta luck. Lots of things
require sym-links.
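(For what it's worth, one quick way to tell which world a Windows binary
was built for is to list its DLL imports; "foo.exe" here is just a
placeholder:

    objdump -p foo.exe | grep 'DLL Name'
    # CYGWIN-linked:  shows "DLL Name: cygwin1.dll"
    # MinGW-linked:   typically only msvcrt.dll and Windows system DLLs

That assumes a binutils objdump which understands PE executables, as the
CYGWIN and MinGW ones do.)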
> But, really, what is the problem you're trying to solve? ...
That is the $64,000 question. Details at end.
The short of it is to do _something like Homebrew on Mac OS X_ but for
any Unix or Unix-like environment.
You can [re]build from source if you have to, but you might be able to
find and run a pre-compiled copy of whatever package you're after.
Let me re-phrase that: I'm already doing something sorta like Homebrew,
and it is getting out of hand.
The ABI question is an attempt to rein it in.
> ... Do you want to
> just build something or do you want to actually test it as well? For
> build-only, you could set up cross-compilation/build environments that are
> radically different from the hosted environment (GNU's SDK obviously
> supports this -- that's how the virus spreads ;-). But to actually
> test/certify that something runs in another environment (even just a
> different version of the same environment), you really need to have that
> target environment running (if only in a virtual machine).
Testing is good. Testing is wise. "Test early. Test often."
This conversation helps: I see a gap in my plan. I don't test all the
platforms where a build /might/ work. For those, it's usually a manual
"try it", followed by a re-do of a specific build for that platform. But
that is both manual and late in the game.
> Maybe it's that you want the fewest number of packages?
When building from source, for any single /package/, I want to have as
many /platforms/ in the can as possible, without falling into the
per-distro complexity common with Linux. I have a good handle on
building for Linux w/o distro or runtime sensitivity, but I have been
away from the others for too long (and the field is bigger; the BSDs
have grown up).
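(One common trick, sketched here rather than a complete recipe: static
linkage sidesteps the runtime-library question, at the cost of size and
a few glibc features like NSS lookups. The package and binary names are
hypothetical.)

    ./configure LDFLAGS="-static"   # typical autoconf-style package
    make
    file src/foo                    # should report "statically linked"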
----- cut here -----
Previous work environments were heavy with NFS and automounter. We could
build some FOSS package and just toss it out there for any Unix system
to mount and run. It was/is easy to identify the package (and release)
and the available platforms. In the early days, I sometimes found the
need to identify the platform release (e.g., Solaris-5.1.1 versus
Solaris-5.1.2), but that was rare. Platform release never worked for
Linux because the kernel and the runtime library are independent (and
the runtime library was always the bigger factor for the "will it run?"
question). Over time, more operating systems came to support more
hardware: OS X on PPC and now X86_64/AMD64, Solaris on SPARC and x86,
Linux on /anything/.
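(You can see the two moving parts directly on any glibc-based Linux; the
commands are standard, the sample output illustrative:

    uname -sr                   # kernel,    e.g. "Linux 4.4.21"
    getconf GNU_LIBC_VERSION    # C library, e.g. "glibc 2.23"

and the two version numbers vary independently across distros.)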
We don't use shared filesystems as much now (everything is file synch
instead), but we do sometimes use shared disks. (Think removable media,
but also SAN/iSCSI or shared virtual.) Then too, containers are a new
reason to again share filesystems (whether on disk or networked or
bind/alias/shadow).
All of this is _outside of the usual package manglement_: the usual
package managers complicate handling anything outside of "vendor space"
(or maybe "distributor space"), and they are also tough for people to
use w/o admin privs.
Example:
At the PGP key signing, someone might come with a stash of CD or flash
media with GPG 4.1.20 pre-compiled for Linux and Windoze and OS X. The
way I'd do that is to have ...
* an ISO-9660 FS
* with a directory called "gnupg-4.1.20"
* and sub-directories "Linux-i386" and "CYGWIN-i386" and "Darwin-i386"
Maybe other HW too if time and resources allowed for them. AMD64/X86_64
is a bi-modal architecture so "-i386" runs there, but we could also have
"-x86_64" directories to go along with these three example systems. It's
all "just make" under the covers.
A CD with GPG is one of a bazillion point-and-run examples. Flash, NFS,
SMB, shared disk images, shared LUNs (SAN or iSCSI or whatever) ... or
just pull pre-built stuff from a repo via TAR or RSYNC. So we're back to
file synch again, full circle.
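For example, with a hypothetical host and path:

    rsync -av rsync://pkgs.example.net/pub/gnupg-4.1.20/Linux-i386/ \
          /usr/local/gnupg-4.1.20/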
-- R; <><