To answer your question ... we first have to ask: what is Unix?
Before I answer that, let me say that it is VERY likely that I do not have my facts right - so you out there who know the facts, please correct me, don't flame me!
As far as I know, the term UNIX can only apply to the operating system developed at Bell Labs in the late 60's and anything derived from that. These include all the System V derivatives like SCO, HP-UX and Solaris, but not the BSD derivatives (including the older SunOS and MacOS X), and CERTAINLY not GNU/Linux and its derivatives. The very idea of GNU (from which much of Linux's userland sprang) was that GNU's Not Unix (it has been scientifically proven that computer scientist type people like playing with recursive acronyms).
The Unix trademark originally belonged to AT&T (Bell Labs' parent company) and has successfully passed through a number of owners, including Novell at one time, to now belong to an organization called X/Open (these days part of The Open Group), which is the de facto standards body for all things Unix.
Over the years, the differences of opinion between these three factions (System V, BSD and GNU/Linux) alone have been the cause of so many religious wars (largely along the lines of "my dad can beat up your dad any day") that whole books could be written about them. In the old days, wandering minstrels and bards would probably have sung songs about the long-dead heroes of these vicious battles.
The differences still exist - but in truth, there has been so much cross-pollination that no one derivative is pure-bred any more. All three have learned from and contributed immensely to each other's body of work (the beauty of open source and academia). For example, TCP/IP - the basis of the Internet - was not developed for System V but for BSD in the first place. It made its way into System V (true Unix) many years after that. The bash shell is a fairly recent GNU development, but it can be seen in almost every flavour of Unix around.
In today's more enlightened times, we have come to realise that there is no way that EVERYONE will agree on ONE TRUE WAY to design or implement something. The closest that the X/Open group has come to a common standard for what is Unix is the POSIX specification (strictly an IEEE standard, which X/Open builds upon). This largely specifies designs and best practices regarding what a true Unix should look like and work like. Amongst many other things, this means that programs written for one POSIX compliant system should easily be ported and compiled on another POSIX compliant system.
You could say that anything that is POSIX compliant is a true Unix.
Linux (with glibc) is POSIX compliant. The various BSDs are POSIX compliant. And of course Solaris, SCO, HP-UX, and I think AIX, are all POSIX compliant - and so all of these can be considered to be real Unix systems.
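Just to make the portability point concrete, here is a minimal sketch (my own toy example, not taken from any vendor's documentation) of a program that sticks to POSIX interfaces only - so the very same source should, in principle, compile unchanged on Linux, the BSDs, Solaris, or MacOS X:

    /* posix_demo.c - uses only POSIX interfaces */
    #include <stdio.h>
    #include <unistd.h>    /* getpid(), gethostname() - both POSIX */

    int main(void)
    {
        char host[256];

        /* Ask the OS who and where we are - the same calls work
           whatever the kernel underneath happens to be. */
        if (gethostname(host, sizeof(host)) != 0)
            return 1;

        printf("process %ld running on %s\n", (long)getpid(), host);
        return 0;
    }

Build it with a plain "cc posix_demo.c" on any of the systems above and it should just work - that, in a nutshell, is what POSIX buys you.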
However, the funny thing is that many systems that are certainly not Unix have also borrowed from the POSIX specification. Windows NT (and hence 2000 and XP) has a POSIX subsystem. BeOS is largely POSIX compliant. Some IBM mainframe OSs (MVS, OS/390) are 100% POSIX compliant.
So ... although none of the above diatribe really answers your question, it should give you enough insight to realise that your question is not trivial!
Now, about those microkernels ...
Usually in any operating system, there are three levels of action - the hardware, the operating system, and the applications. The hardware is the physical processor, memory, hard drive, network adapter etc.
Imagine a computer as being a sandwich shop. You, the consumer, are a user (or rather a user-run application). The people behind the counter are the operating system, and the raw materials and tools are the hardware and resources.
The applications you and I run (whether they are simple things like the Finder, or complex things like Photoshop) usually never deal with the hardware directly. This is for exactly the same reason that you would not be allowed to go behind the counter in a sandwich shop and start making your own sandwich from the raw materials there (although I am sure you are perfectly capable of making your own sandwich) - the people who work there know where all the correct tools and raw materials are, they know how to use the tools properly, and they can ensure that people are served in the correct order. This is also the advantage of platform independence. You can go into any shop and order a tuna sandwich on whole wheat bread - and you know pretty much what to expect - even though the location of the resources, and the exact technique of making the sandwich, may be different from one shop to another.
Now, some sandwich shops are small, and their sandwiches are not very complicated. Usually in such places, you would have just one person take your order, make your sandwich, and take money from you. This is similar to how Windows 3.1/95/98 and MacOS (up to 9.1) work. In the case of Windows 3.1 and MacOS before System 7, there would only be one person working there, so while your order was being processed, no one else would be able to get a sandwich. In later versions, you could have two or three people working there, but each person would take a customer all the way through the whole cycle. This kind of shop is prone to outages (what do you do when that one person is sick?), and it is not scalable - to serve a hundred customers, you need a hundred times the space, and a hundred times the number of people working.
In some bigger sandwich shops, you have one person take your order, and s/he passes your order on to someone else who makes the sandwich, who passes your sandwich on to the person who takes money, and you get your sandwich. This makes sense for a system where there are a lot of customers, and the sandwiches are a bit more complex, so you have people who specialise in only making sandwiches. By taking your order and passing it on to a sandwich maker, the order taker is free to accept a new order while your sandwich is being made. This is the model that most Unix systems use. You, the user, place an order (request a task to be performed). The message is passed to the sandwich maker (the kernel) where the sandwich is made (the task is performed), and it is sent on to the cashier where it is handed to you for your consumption. You still have one person making your sandwich (a monolithic kernel), but the whole process is more efficient, more reliable and more scalable. Also, the layers are independent of each other - you can replace one order taker with another order taker without having to replace the sandwich maker or the cashier. The same goes for the sandwich maker and the cashier - you can swap them out without having to retrain, or rebuild the complete process, as long as they speak the same language and use the same terms (implement the same specifications).
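If you want to see what "placing an order" looks like in code, here is a rough sketch (again, a deliberately trivial example of my own): a single system call, where the request crosses the counter from user space into the monolithic kernel, which does all the work itself before handing back the result:

    /* order.c - placing an "order" with a monolithic Unix kernel */
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        const char *order = "one tuna on whole wheat, please\n";

        /* write() is the counter: user space stops here, the kernel
           takes over (filesystem, terminal driver, hardware) and
           returns a count of how much of the order it filled. */
        ssize_t served = write(STDOUT_FILENO, order, strlen(order));

        return (served == (ssize_t)strlen(order)) ? 0 : 1;
    }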
Now, there is a third type of sandwich shop - usually even bigger, or more complex - where your sandwich is not made by just one person: you have one person who prepares the bread, another who adds the meat, a third who adds the condiments - and your order is passed from one person to another, each filling in the appropriate item. The work of executing a task is split up into smaller and smaller units - each one independent of the others - and you can very easily add or remove people from the assembly line to cope with demand. This is a microkernel architecture - what MacOS X and other Mach-based systems use.
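And for completeness, here is a toy, single-process simulation of that assembly line (all the names below are my own invention - a real microkernel like Mach would use separate tasks exchanging kernel-mediated messages, not plain function calls):

    /* assembly_line.c - a toy sketch of the microkernel idea */
    #include <stdio.h>
    #include <string.h>

    struct order {                /* the "message" that travels the line */
        char sandwich[128];
    };

    /* Each "server" does one small job and knows nothing of the others. */
    static void bread_server(struct order *o)  { strcat(o->sandwich, "wheat bread"); }
    static void meat_server(struct order *o)   { strcat(o->sandwich, " + tuna"); }
    static void extras_server(struct order *o) { strcat(o->sandwich, " + mayo"); }

    int main(void)
    {
        struct order o = { "" };

        /* The "kernel" here does nothing but pass the message from one
           server to the next - which is more or less all a microkernel
           does: message passing, scheduling, and very little else. */
        void (*line[])(struct order *) = { bread_server, meat_server, extras_server };
        for (size_t i = 0; i < sizeof(line) / sizeof(line[0]); i++)
            line[i](&o);

        printf("order up: %s\n", o.sandwich);
        return 0;
    }

The nice part, as with the real thing, is that adding a cheese_server is just one more stop on the line - none of the existing servers need to change.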
Now that you are thoroughly confused by my obscure metaphors, I should probably answer your question ...
MacOS X is basically derived from BSD for the Mach microkernel running on PowerPC. Note the usage of the term "BSD for Mach", like you would have "BSD for x86" and "BSD for PowerPC". I am sure they could have based MacOS X on a native PowerPC implementation of BSD, but by choosing Mach, they have kept the door open for porting it to other processor architectures very easily.
As such, MacOS X is a true Unix - as much as anything can be called a true Unix - especially when compared to Linux. The choice of whether to use a microkernel or not is a small implementation issue - it is a question of form, not of function. Microkernels are fairly new (in real world systems, that is - not in academia) and so you will find people deriding them - some will have valid reasons, but most will simply be the same kind of people who tell you that in their day they had to walk two miles in the snow ... uphill ... both ways ... just to get to school.
I personally do not know the exact merits and demerits of microkernels vs. monolithic kernels, but I would guess that microkernels need more overhead to manage the little sub-kernel tasks, but are more flexible and more platform- and implementation-independent. For those familiar with programming paradigms - I somehow see it as being very similar to the procedural vs. object-oriented issue. That's right - one big messy glob of goo vs. lots of little discrete self-contained drops that sometimes come together to exchange their contents!
Hope this helps (or at least spurs someone to write something in reply that helps).
P.S. Linus Torvalds (original creator and current manager of the Linux kernel) is a staunch supporter of monolithic kernels over microkernels.