Aros/Developer/Docs/HIDD/Graphics

Introduction

All display driver operations have been moved to monitorclass, which should provide a good abstraction layer. To finish this off, monitorclass will become the only place where Intuition talks to graphics drivers; all other functions will go through monitorclass instead of making direct driver calls. Currently the only such caller is OpenScreen(). The only exception will be pointerclass, which installs a colormap in the sprite's bitmap. In fact we could remove this too; we'd need to add one more tag to AllocSpriteDataA(), something like SPRITEA_Palette.

1. Specifying gfx= and lib= on the GRUB command line still works. It will keep working as long as you keep the DEVS:Monitors/Default file in place.

2. You may load additional drivers if you want to. For newstyle drivers, just move the driver from Storage/Monitors to Devs/Monitors. Currently only the SDL driver is newstyle. You may also use your old drivers (NVidia, Radeon, VMWare). To do this, make a copy of the Storage/Monitors/Wrapper file in Devs/Monitors (the name does not matter; you may name copies "NVidia", "ATI" - whatever you like) and edit the icon's tooltypes (specify the library and HIDD name there). Note that you can't have multiple instances of the same oldstyle driver - they simply were not designed with this in mind.

3. Don't attempt to do bizarre things like instantiating VGA and VESA at the same time via Wrapper. These drivers are special: they can manage only a single display, and they can't coexist, even in theory. This is how the hardware works; it's not a software issue. With time they will be rewritten to be more robust and resistant to such abuse. :)

4. For hosted: you may create several GDI displays (on Windows); just edit the GDI icon's tooltype. SDL doesn't allow this by design (again, SDL's design, not the driver's). X11 could allow this but it's too ancient; a volunteer is wanted to fix it. I can explain how.

5. You may add display drivers at runtime. Just double-click the driver's file and that's it! However, note that doing this while you're running in VGA or VESA mode MAY cause problems (if the new driver tries to steal the hardware from the VGA driver which is displaying your screen at the moment). Just reboot and it will be okay. Proper boot mode shutdown is a tricky part; it's a bit underdesigned at the moment, so consider it a WIP. If you run a driver for a different card, there will be no problems. If you are already running a native driver - again, no problems; native mode drivers live together perfectly (for example NVidia + ATI).

FindName("hidd.graphics.gc") can be handled similar to attribute IDs. Method IDs are also numbered in sequence. It's enough to know base ID, which is the ID of the first method, and then add mo_... offsets to it.

Move initialization of the monitor drivers to C:LoadMonDrvs? NO!!!!!!! There is a rationale for it being where it is now. Loading display drivers implies unloading boot-mode drivers, and unloading a driver is technically possible only when there are no screens at all. Consequently, executing C:LoadMonDrvs from within a shell booted with no startup-sequence, or even from a shell auto-opened because of some error, will cause a crash. IMHO that's not user-friendly at all.

> Although I believe there are (still?) some technical reasons on the PC architecture that would require AROS to make sure the monitor drivers are loaded before Intuition is up.

It's not a PC architecture limitation; it's an AROS limitation. If a driver owns some bitmaps, it can't be unloaded. Theoretically it's possible to implement safe unloading by force-closing all windows and screens, but it would be difficult and would perhaps require an excessive amount of code. So, please, keep this where it is, and add a 'Do not load display drivers' checkbox to 'display options' in the boot menu.

Attributes

Currently every HIDD bitmap in AROS has a Hidd_BitMap_GfxHidd attribute telling which driver was used to create it. Normally, all operations involving this bitmap should be performed using this driver. For example, if I'm going to create a GC for some drawing, I am supposed to call HIDD_Gfx_NewGC() on the bitmap's driver.
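In code, that convention looks roughly like this (a sketch; it assumes the needed attribute bases have already been obtained with OOP_ObtainAttrBase()):

  #include <proto/oop.h>
  #include <hidd/graphics.h>

  /* Create a GC on the driver that owns the given bitmap */
  OOP_Object *gc_for_bitmap(OOP_Object *bm)
  {
      OOP_Object *gfxhidd = NULL;

      /* Ask the bitmap which driver created it... */
      OOP_GetAttr(bm, aHidd_BitMap_GfxHidd, (IPTR *)&gfxhidd);

      /* ...and create the GC on that driver */
      return HIDD_Gfx_NewGC(gfxhidd, NULL);
  }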

But what to do if the operation involves two bitmaps, as in CopyBox()? On which driver should it be called? There are at least three cases:

1. Both bitmaps belong to the same driver.
2. One bitmap is a memory bitmap and the other is a display driver's bitmap.
3. Both bitmaps are display bitmaps but belong to different drivers (different displays on different graphics cards).

Case 1 is simple and clear.

Case 2 is a little more tricky. Which driver should be called - the memory driver or the card driver? The memory driver is relatively slow but can work with anything. What about the card driver? There is some chance that it can accelerate the operation using DMA. However, there can be restrictions on the bitmap (alignment, for example) that have to be taken into account when creating the memory bitmap. Because only the card driver knows about these restrictions, we should have called HIDD_Gfx_NewBitMap() on its driver in order to create the memory bitmap. However, we may not know which driver to use at bitmap creation time. A possible solution is to query the friend bitmap; if there's no friend - sorry, no luck. This means that we always perform CopyBox() on the card's driver. The card driver is expected to examine a memory bitmap and accelerate the operation if possible (when it is in the right place and has the right format); otherwise it calls the superclass.

Case 3 is the most problematic one. Which driver should be called? Is it even possible in theory that two cards may perform a DMA transfer from one to the other? Or two independent outputs of the same card (from the machine's point of view they are two different devices, AFAIK)?
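Returning to the conclusion for case 2, the dispatch could be sketched like this (a sketch only; the driver itself decides whether to accelerate or to fall back to the superclass):

  #include <proto/oop.h>
  #include <hidd/graphics.h>

  /* Always dispatch CopyBox to the destination (card) driver;
     it accelerates if the source memory bitmap meets its
     restrictions, otherwise it falls back to the superclass. */
  void copy_box(OOP_Object *gc, OOP_Object *src, WORD sx, WORD sy,
                OOP_Object *dst, WORD dx, WORD dy, UWORD w, UWORD h)
  {
      OOP_Object *drv = NULL;

      OOP_GetAttr(dst, aHidd_BitMap_GfxHidd, (IPTR *)&drv);
      HIDD_Gfx_CopyBox(drv, src, sx, sy, dst, dx, dy, w, h, gc);
  }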

The next big question is what to do with the two object pools: one of them keeps GCs and the other keeps temporary planar bitmap objects. At first glance, they should simply be global. However, only at first glance... If we dig deeper, the GC class may be subclassed (no one currently does this, which is why this still works). Currently graphics.library just creates objects of the CLID_HIDD_GC class, and taking subclasses into account this is wrong. GCs should be created using HIDD_Gfx_NewGC() on the driver which is going to use them. This means that the GC cache should stay driver-specific.

As for planar bitmaps, the situation is much more complex. Above we came to the conclusion that memory bitmaps may be friends of VRAM bitmaps if they meet specific requirements; in such a case the card driver may use hardware acceleration when working with them. However, this means that we can't take an arbitrary empty object from the cache and attach it to an arbitrary bitmap. The bitmap was created with nothing particular in mind, so an attempt to attach an object whose GfxHidd attribute points to some GFX driver is nonsense. It likely will not work! Even if we had separate pools per driver, this wouldn't help, because temporary objects are not associated with their bitmaps in any way. A pool becomes a cesspool... :)

Yes, we could always create planar bitmaps using the memory driver (an object of the CLID_HIDD_Gfx class). However, in this case we should be prepared for working with planar bitmaps always being slow, because it is always done by the CPU.

1. Neither the nvidia nor the ati HIDD (nor any other) is allowed to poke the private planar bitmap data of gfx.hidd directly.
2. The planar bitmap for a given font is created *once* and used many times.
3. Both the nvidia and ati HIDDs may store the planar data there in a form which is best optimised for font drawing.

graphics.library does not poke this bitmap directly, therefore the nvidia and ati HIDDs have their own planar bitmaps which they *are allowed* to poke directly. Have a look at the BlitColorExpansion methods defined there.

Pixel Format

Use vHidd_StdPixFmt_ARGB32 instead of vHidd_StdPixFmt_Native32 in the HIDD_BM_GetImage() and HIDD_BM_PutImage() calls.
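For example, a sketch that reads a block as 32-bit ARGB regardless of the bitmap's native format (the buffer is assumed to hold one ULONG per pixel):

  #include <proto/oop.h>
  #include <hidd/graphics.h>

  /* modulo is the buffer's bytes per row */
  void read_argb(OOP_Object *bm, ULONG *buf, WORD x, WORD y, WORD w, WORD h)
  {
      HIDD_BM_GetImage(bm, (UBYTE *)buf, w * sizeof(ULONG),
                       x, y, w, h, vHidd_StdPixFmt_ARGB32);
  }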

Cursors

This is really strange behavior. One question first: do you set aHidd_Gfx_SupportsHWCursor to TRUE? If not, you get undefined behavior, because fakegfx.hidd plugs in, and it relies on the existence of a framebuffer.

Screen Drag

You may implement screen drag. Note two attributes: aHidd_BitMap_LeftEdge and aHidd_BitMap_TopEdge.

ScrollVPort() just sets them; do whatever you need in your code. They should also be gettable (the superclass always returns 0). This is used to give the driver the ability to fix up invalid offsets: after setting these attributes ScrollVPort() gets them back, and only then do they go into the ViewPort. This way you may limit scrolling.

The offset is counted from the physical screen corner to the bitmap corner (i.e. it's negative if you're panning around a bitmap which is larger than the screen).
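The set-then-get fixup can be implemented by overriding Root::Set (and Root::Get) in the driver's bitmap subclass. A minimal sketch, assuming hypothetical MyBitMap/clamp_offset names and the usual AROS tag-iteration pattern:

  #include <proto/oop.h>
  #include <proto/utility.h>
  #include <hidd/graphics.h>

  struct MyBMData { LONG left, top; };   /* hypothetical instance data */

  extern LONG clamp_offset(LONG value);  /* hypothetical hardware limit check */

  VOID MyBitMap__Root__Set(OOP_Class *cl, OOP_Object *o, struct pRoot_Set *msg)
  {
      struct MyBMData *data = OOP_INST_DATA(cl, o);
      struct TagItem *tag, *tstate = msg->attrList;

      while ((tag = NextTagItem(&tstate)))
      {
          if (tag->ti_Tag == aHidd_BitMap_LeftEdge)
              data->left = clamp_offset((LONG)tag->ti_Data);
          else if (tag->ti_Tag == aHidd_BitMap_TopEdge)
              data->top = clamp_offset((LONG)tag->ti_Data);
      }

      OOP_DoSuperMethod(cl, o, (OOP_Msg)msg);
      /* Reprogram the hardware scroll registers here. Root::Get must
         return data->left/data->top so that ScrollVPort() sees the
         fixed-up values (remember the superclass always returns 0). */
  }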

In order to tell Intuition that screens may be bigger or smaller than the display, there are additional sync attributes: Sync_HMin, Sync_HMax, Sync_VMin and Sync_VMax. By default they are equal to HDisp and VDisp respectively, but this is overridden if you set them explicitly.

Framebuffer

First, I can explain why it works this way when you call DoSuperMethod(). Look at the Show() implementation in the base class. It will return NULL if a framebuffer is not present. This means that SDD(GfxBase)->frontbm (see rom/graphics/graphics_driver.c, SDD(GfxBase)->driver_LoadView()) will always stay NULL. There's a comparison before it, so Show() is never called twice on the same bitmap.

I guess you need to add some debug output to driver_LoadView() in order to see what happens and why. Remember that things are expected to work correctly when you always return msg->bitMap in your Show().

Some explanation of what is actually done in driver_LoadView():

When the system uses a framebuffer, there's only one framebuffer bitmap which is really displayable, and it is always on display. The graphics.hidd base class in its Show() simply copies the bitmap to be shown into the framebuffer and returns the framebuffer pointer. Then the code in driver_LoadView() takes the struct BitMap which is being shown and swaps the objects in it. The original object is saved in SDD(GfxBase)->bm_bak. Two other saved things are the color model number and the colormap object. Returning the original pointer from Show() makes this code actually do nothing (objects are swapped with themselves).

The following methods and attributes have been added to our graphics classes:

1. HIDD_Gfx_GetGamma() and HIDD_Gfx_SetGamma() - explained in the autodoc. They are designed in such a way that it's possible to implement the MorphOS API on top of them (if we want to).

2. Changing display frequencies. In order to make them changeable you'll have to:

  • Supply aHidd_Sync_Variable = TRUE to your sync objects.
  • Make sure your sync objects contain complete data (aHidd_Sync_HSyncStart, aHidd_Sync_HSyncEnd, aHidd_Sync_VSyncStart and aHidd_Sync_VSyncEnd should not be zero; see the sketch after this list).
  • Implement the HIDD_Gfx_SetMode() method in your graphics driver.
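For illustration, a complete sync description for a changeable mode might carry the full timing set like this (a sketch; the 640x480 values are standard VESA timings, used here only as an example):

  #include <hidd/graphics.h>
  #include <utility/tagitem.h>

  /* Illustrative 640x480@60 sync with complete timing data,
     marked variable so the user can retune it later. */
  struct TagItem sync_640x480[] =
  {
      { aHidd_Sync_PixelClock, 25175000 },
      { aHidd_Sync_HDisp,      640      },
      { aHidd_Sync_HSyncStart, 656      },
      { aHidd_Sync_HSyncEnd,   752      },
      { aHidd_Sync_HTotal,     800      },
      { aHidd_Sync_VDisp,      480      },
      { aHidd_Sync_VSyncStart, 490      },
      { aHidd_Sync_VSyncEnd,   492      },
      { aHidd_Sync_VTotal,     525      },
      { aHidd_Sync_Variable,   TRUE     },
      { TAG_DONE,              0        }
  };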

HIDD_Gfx_SetMode() now receives a pointer to the modified sync object. You're not switching screens, so you keep the current bitmap; you just get notified that this sync has changed. A driver is expected to check whether the given sync is on display at the moment and rebuild the display if needed.

  • Show() gets called with a bitmap.
  • From the bitmap I extract the ModeID.
  • Using the ModeID I get the Sync from moHidd_Gfx_GetMode.
  • This sync is stored.

Should I compare the passed sync with the one I stored during Show()? If so, how should the comparison be done - by checking pointers (it's the same object) or by checking whether aoHidd_Sync_HDisp/aoHidd_Sync_VDisp match between the two syncs? Also, what if the passed sync has a different HDisp/VDisp than the current one (meaning I would need a new bitmap)?

In order to try out (2), the MonED program will do. It can be built as Prefs/Monitor. It is not built by default since it's currently very experimental.

P.S. Modification of total_color_clocks in the MonitorSpec results in modification of aHidd_Sync_PixelClock, while aHidd_Sync_HTotal stays constant. I don't know if this is correct; modify it if needed. However, if both parameters need to be changed, perhaps direct sync object access is needed. The sync object pointer is placed into SpecialMonitor->reserved1, and I think it will stay there forever.

First, it's now completely okay to fetch the object and use OOP calls on it. The respective MonitorSpec will be maintained automatically.

Second, the sync object became really smart, and it does not allow modifying things that shouldn't be modified (like VDisp/HDisp, Variable, Description, and so on).

Third, you'll now get a notification if something really changed (one of the timing attributes was supplied to OOP_SetAttrs()).

Fourth, do_monitor() is still there and it still works. However, I changed it to modify HTotal instead of PixelClock. I guess that's closer to the original behavior. I agree that it doesn't fit our video hardware well, so feel free to write your own display mode editor from scratch. :)

The associated sync object may be found in MonitorSpec->ms_Object (an aliased field of the never-used DisplayInfoDatabase list). You may simply get this object and use OOP_GetAttr() and OOP_SetAttrs() on it. Your graphics driver will be notified by the sync object automatically; you don't need to call SetMode() yourself.
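Retuning a mode from a display editor then boils down to something like this (a sketch; error handling omitted, and ms_Object used as described above):

  #include <proto/oop.h>
  #include <graphics/monitor.h>
  #include <hidd/graphics.h>

  /* Change the pixel clock of the sync behind a MonitorSpec.
     The owning driver is notified by the sync object itself. */
  void set_pixel_clock(struct MonitorSpec *mspc, ULONG hz)
  {
      OOP_Object *sync = (OOP_Object *)mspc->ms_Object;
      struct TagItem tags[] =
      {
          { aHidd_Sync_PixelClock, hz },
          { TAG_DONE,              0  }
      };

      OOP_SetAttrs(sync, tags);
  }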

Overlays

HIDD_Gfx_NewOverlay() just reserves memory for the overlay. Reserving memory does not have anything to do with the current screen mode anyway. Next, the overlay needs to be brought onto the display; I suggest some HIDD_Overlay_Show() method for this.

Since the overlay object is created by the graphics driver object, the driver can pass a reference to whatever data is needed using private attributes of the overlay class implementation.

The driver prepares scaling according to the given rectangle and the resolution of the currently visible display.

This is also covered by passing data from the driver object to the overlay object.

I would leave this to the driver. Many cards support *any* key color, but assuming this statement holds for all graphics chips would be a wrong decision. Let the graphics driver decide for itself.

Hmmm... I see one big quirk here... As I understand it, color keying is used to enable depth-arranging windows with attached overlays. The overlay's area is filled with the key color, and the overlay is visible only through this color.

But let's imagine a situation...

+------------------+
|                  |
|  video window    |
|          +--------------+
|          |     some     |
+----------|     fancy    |
           |    painting  |
           +--------------+

If the key color happens to be present in "some fancy painting", the video will be visible there instead, in the part where it covers the video window. What to do? How will I know that the key color will never be found on the desktop? It doesn't matter who generated the color; the second window may not even be open at the moment of color generation.

Don't forget to implement SwapVLayerBuffer() and the VOA_DoubleBuffer/VOA_MultiBuffer tags, by the way. Having a triple-buffered overlay is much nicer with mplayer, and its subtitles in particular. :)

Regarding depth, it follows the Intuition window, but to achieve that you need to set the VOA_ColorKey tag at vlayer creation (which will fill the Intuition window with some particular backfill colour by default, I think). With that color key, cgxvideo knows where in a window to display the overlay video and where not. So if a window is partially or totally covered, the vlayer will be handled accordingly, following the depth arrangement.

Color keying was going to be the next question. So do I understand correctly that:

1. If you set VOA_ColorKey to TRUE, the system will pick some color for keying and fill the overlay area in the window with this color.
2. At any moment I can obtain the color value and pen number using GetVLayerAttr() with VOA_ColorKey and VOA_ColorKeyPen.
3. After this, the video overlay will be clipped according to the part of the display filled with the key color.

Should the driver provide the key color value (because hardware may impose some limitations on it), or should the library pick it and provide it to the driver?

Can there be (in theory) several overlays on one display? More than one real overlay sounds very unlikely with fewer than two graphics cards. Actually, with cgxvideo the API would allow that, however it was never implemented under AOS (at least on publicly available cards).

Normally an application would expect to get the single, fast overlay. If there is more than one, and all the additional ones are slower, this raises the question of why not simply blit directly to the bitmap in such cases.

Consequently, if more than one overlay is available - with some being emulated, and possibly slowly emulated - there should also be a way for applications to determine whether the currently locked overlay is a real one or not, so they can decide between this and the other available options (e.g. using their own AltiVec code and blitting directly).

It depends on the hardware and, actually, on the driver. A video overlay can be done in one of two ways:

1. As a real overlay supported by the gfx card. Here one, two or maybe three overlay objects are allowed to coexist at the same time (one overlay is the most common case, though).
2. As a 2D surface blitted by the 3D engine. Here the number of available overlays is limited only by available memory.

Implement a video overlay class similar to the bitmap class. This would make it possible to create several overlays (if some hardware ever supports this). BTW, maybe bitmap and overlay do not really differ? Would it not make sense to have the same set of methods for both bitmap and overlay? In this case we would have to extend the pixelformat class to accommodate new formats (like YUV or YPbPr). Usually one does not use methods like line/rect drawing on an overlay object; accessing the YUV data directly is much more likely. Probably you checked it anyway, however regarding cgxvideo the following is important to keep in mind:

SRCFMT_YUV16
(not recommended, use YCbCr16 instead)
SRCFMT_YCbCr16
SRCFMT_RGB15PC
SRCFMT_RGB16PC

I.e. there are four formats to be supported, with YUV not being the recommended one.

YCbCr16 works fine and is obviously better suited than RGB for an application like mplayer that provides YUV data. But your cgxvideo doc is a bit outdated; there has also been an additional pixel format for a few years now: SRCFMT_YCbCr420 (YV420, planar overlay), which is quite interesting when the gfx card supports it, since it only needs 12 bits per pixel instead of 16, giving a nice speedup on copy.

ScreenDepth()

ScreenDepth() always calls RethinkDisplay(), even when the screen order has not changed. If you look at the ScreenDepth() code you'll see checks for whether the screen is not already first/last. You can try moving RethinkDisplay() into these conditions.

On the other hand, MrgCop() originally does not clobber the screen on an already active view, AFAIK. So the actual problem is likely an incorrect implementation in your driver.

I suggested the following model: every bitmap contains two states, an actual one and a pending one. The pending state is recalculated during PrepareViewPorts, and during ShowViewPorts the pending state becomes the actual one. The actual hardware update happens only if the new state differs from the old one. Yes, ShowViewPorts immediately follows PrepareViewPorts on the active view.
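A sketch of that model in driver instance data (all names here are hypothetical):

  #include <exec/types.h>
  #include <string.h>

  /* Per-bitmap display state for the two-phase model */
  struct BMState
  {
      LONG  left, top;   /* scroll offsets */
      ULONG modeid;      /* display mode   */
  };

  struct MyBitMapData
  {
      struct BMState actual;   /* what the hardware shows now      */
      struct BMState pending;  /* recalculated in PrepareViewPorts */
  };

  /* ShowViewPorts: commit only if something really changed */
  static void commit(struct MyBitMapData *d)
  {
      if (memcmp(&d->actual, &d->pending, sizeof(struct BMState)))
      {
          d->actual = d->pending;
          /* reprogram the hardware here */
      }
  }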

Introduction To Many Monitors/Cards

Using different driver instances would be easier than introducing some new concept like units. Additionally, it fits the OOP paradigm perfectly.

Driver = class and instance = object. Several outputs on the same card = several similar cards = several objects of one class.

First, the DEVS:Monitors and SYS:Storage/Monitors directories appeared. Currently Storage/Monitors is empty (except on hosted ports) and DEVS:Monitors contains only a single Wrapper file. Display drivers may now come in a new form - as a plain executable which is put into the DEVS:Monitors directory. An example of building such a driver is the overhauled SDL driver in arch/all-hosted/hidd/sdl. Note the startup.c file and the comments in it.

In order to write a boot-time driver (one which can be used to boot up the system), a simple struct Resident needs to be added to the driver, which will call its startup code. There's no need to build a library or anything like that.
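A minimal sketch of such a Resident (names, version and priority are placeholders; the init function runs the hardware scan and registration described below):

  #include <exec/types.h>
  #include <exec/resident.h>

  extern void MyDriver_Startup(void);  /* hypothetical startup entry */
  extern const char MyDriver_End;      /* hypothetical end-of-module marker */

  static const TEXT name[]     = "mydriver.hidd";
  static const TEXT idstring[] = "mydriver 1.0";

  const struct Resident MyDriver_ROMTag =
  {
      RTC_MATCHWORD,                       /* rt_MatchWord */
      (struct Resident *)&MyDriver_ROMTag, /* rt_MatchTag  */
      (APTR)&MyDriver_End,                 /* rt_EndSkip   */
      RTF_COLDSTART,                       /* rt_Flags     */
      1,                                   /* rt_Version   */
      NT_UNKNOWN,                          /* rt_Type      */
      9,                                   /* rt_Pri       */
      (char *)name,
      (char *)idstring,
      (APTR)MyDriver_Startup               /* rt_Init      */
  };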

The function of the Wrapper program is to load a display driver which is specified in the old way (on the kernel command line or in the S:hidd.prefs file). Note that it completely ignores input drivers; they are now completely separate. This allows old display drivers to be used until they are rewritten.

I strongly suggest that display driver authors convert their code to the new form. After some time, support for S:hidd.prefs and command line arguments will be removed completely.

Display driver startup code should do the following:

1. Open the needed libraries, obtain OOP attribute bases, create OOP classes. The classes do not have to be public; nobody will refer to them by name.

2. Scan the available hardware (PCI bus, OpenFirmware tree, whatever) and create a gfx driver object for every supported device.

3. Call graphics.library/AddDisplayDriver() on every created object. The function returns zero on success and nonzero on failure. It's completely up to the driver's author what to do in case of a double-start (if the user double-clicks your driver icon, just for fun :)). Normally it's expected that the driver should not re-add hardware that is already in use. Our current PCI subsystem doesn't support device ownership; perhaps this needs to change. You are free to invent absolutely anything. For example, the SDL driver handles this by registering its OOP class as public and trying to find this class upon every startup; if the class is already present, we are already running. A sketch of such a startup follows below.
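Put together, a hedged sketch of the three steps (init_my_class(), find_my_cards() and aHidd_MyGfx_Device are hypothetical; AddDisplayDriver() is used as described above, taking the created object and returning zero on success):

  #include <proto/exec.h>
  #include <proto/graphics.h>
  #include <proto/oop.h>
  #include <utility/tagitem.h>

  #define MAX_CARDS 4

  extern OOP_Class *init_my_class(void);                    /* hypothetical */
  extern ULONG find_my_cards(APTR devices[], ULONG max);    /* hypothetical */

  void MyDriver_Startup(void)
  {
      OOP_Class *cl = init_my_class();                 /* step 1 */
      APTR devs[MAX_CARDS];
      ULONG i, n = find_my_cards(devs, MAX_CARDS);     /* step 2 */

      for (i = 0; i < n; i++)
      {
          struct TagItem tags[] =
          {
              { aHidd_MyGfx_Device, (IPTR)devs[i] },   /* hypothetical attr */
              { TAG_DONE,           0             }
          };
          OOP_Object *gfx = OOP_NewObject(cl, NULL, tags);

          /* step 3: zero means success, nonzero means failure */
          if (gfx && AddDisplayDriver(gfx))
              OOP_DisposeObject(gfx);
      }
  }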

Note that currently AddDisplayDriver() is A HACK! It shuts down and destroys the previously installed display driver and removes it from the database. So for now, do not create more than one object; implement some #define in your code for this. When graphics.library completely supports a multi-display environment, this #define will need to be removed. The only thing a client knows is "we have a display and here is its instance"; the AddDisplayDriver() function does the rest. It inserts the driver into the database and assigns a monitor ID (the upper half of the mode ID) to it. That's all. After this it's up to the user on which display to open his screens. All displays work in a similar manner.

Users just need to move a display driver from Storage/Monitors to DEVS:Monitors in order to activate it. It will be installed in the system automatically if it finds its hardware; if there's no usable hardware, it will do nothing.

Display drivers may have icons. The loader code looks at the STARTPRI tooltype and uses its value as a priority level. This is done in order to make display mode ID assignment more predictable in the future (IDs are assigned in the order in which drivers are loaded).

Note that the SDL driver can't be loaded from S:hidd.prefs any more, because it's not a library any more! Just place it in DEVS:Monitors and reboot; the rest will be done automatically.

Note that CURRENTLY display drivers can't be added on the fly; attempting to do so will give unpredictable results. This is a graphics.library problem - the support is incomplete. So, for now, do not attempt to run display drivers by hand! Especially Wrapper, which currently can't have double-start protection by design (well, it can, but I don't want to bother with code which will go away after some time).

Part 2. Input subsystem.

In order to deal transparently with (possibly multiple) input sources, two new HIDDs were designed: keyboard.hidd and mouse.hidd. They act in the same way as the PCI "master class" does: they are responsible for the communication between drivers and clients.

In order to start talking to the input subsystem, a program (or driver) needs to create an instance of CLID_Hidd_Kbd (for keyboard) or CLID_Hidd_Mouse (for mouse). The program may provide a callback function during object creation; in this case the program will act as a client and will receive input events. The callback is executed in interrupt context, the same as before. Currently the only clients are keyboard.device and gameport.device; lowlevel.library is welcome to join them. :)
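As a client, subscribing to keyboard input looks roughly like this (a sketch; the exact callback signature is an assumption - check hidd/keyboard.h):

  #include <proto/oop.h>
  #include <hidd/keyboard.h>
  #include <utility/tagitem.h>

  /* Assumed callback shape: called in interrupt context for
     every raw key event, so it must not Wait() or allocate. */
  static void my_key_hook(APTR data, UWORD rawkey)
  {
      /* queue the event for the client task here */
  }

  OOP_Object *subscribe_keyboard(void)
  {
      struct TagItem tags[] =
      {
          { aHidd_Kbd_IrqHandler,     (IPTR)my_key_hook },
          { aHidd_Kbd_IrqHandlerData, 0                 },
          { TAG_DONE,                 0                 }
      };

      return OOP_NewObject(NULL, CLID_Hidd_Kbd, tags);
  }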

In order to register an input driver, the AddHardwareDriver() method has to be called on the "master" object. The parameters for this call are the driver class pointer (not an ID!) and an optional additional taglist. You don't supply pre-made objects because the master class needs to provide its own callback function to every driver.

A driver can be registered at any moment, without any restrictions.

Input streams from multiple drivers are just merged into a single stream.

AddHardwareDriver() returns a pointer to the driver's instance. At any moment it can be removed using RemHardwareDriver(). There's no need to call OOP_DisposeObject() on it, since RemHardwareDriver() does that by itself.
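Registering and removing a driver is symmetric; a sketch, assuming HIDD_Kbd_AddHardwareDriver()/HIDD_Kbd_RemHardwareDriver() stubs with the parameters described above:

  #include <proto/oop.h>
  #include <hidd/keyboard.h>

  /* kbd is an instance of CLID_Hidd_Kbd, cl is the driver class */
  OOP_Object *attach(OOP_Object *kbd, OOP_Class *cl)
  {
      /* The master object supplies its own callback to the driver */
      return HIDD_Kbd_AddHardwareDriver(kbd, cl, NULL);
  }

  void detach(OOP_Object *kbd, OOP_Object *drv)
  {
      /* Also disposes of the driver object */
      HIDD_Kbd_RemHardwareDriver(kbd, drv);
  }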

!!! I need help !!!

Some cleanup needs to be done for other architectures. Startup code needs to be added to the PC-native mouse and keyboard drivers; they need to learn to register themselves. Currently they are registered by the dosboot resident using backwards-compatibility kludge code relying on the oldstyle struct BootConfig. I know Michal hates it, and it really needs to be removed (currently X11 and PC-native input rely on this; I think I'll update X11 myself soon). Some other tasks in this field could be:

1. Implement boot options and display options screens in the boot menu. A boot options screen may present a choice similar to the original AmigaOS. The display options screen could contain at least one checkmark which disables loading display drivers (this is why I decided to put the loader into dosboot and not into the Startup-sequence). Perhaps it could even do some advanced things like listing available drivers and allowing some of them to be disabled (doable with more dosboot overhaul).
2. Separate the serial mouse driver from the PS/2 mouse driver. The new input subsystem allows them to coexist, and even to work at the same time.
3. Implement more functions in lowlevel.library (since there can now be more than one input event listener).
4. Draw icons for the Monitors drawer and the display drivers. :)