Fixes performance drop after suspend/resume on some systems.
|
This will be used by the X Server for VT switch.
|
struct_mutex cannot be held while blocking on DRM lock.
|
Without kernel modesetting, this requires the cooperation of the userspace
modesetting driver. Otherwise, we may have to leave the vblank interrupt
enabled to avoid problems.
|
Only compensate when the driver counter actually appears to have moved
backwards.
The compensation deltas need to be incremental instead of absolute; drop the
vblank_offset field and just use atomic_sub().
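A minimal sketch of what the incremental compensation looks like, assuming
the per-CRTC _vblank_count / last_vblank bookkeeping of this era (field
names approximate, not the exact patch):

    cur_vblank = dev->driver->get_vblank_counter(dev, crtc);
    if (cur_vblank < dev->last_vblank[crtc]) {
            /* Counter appears to have moved backwards: subtract the
             * incremental delta directly rather than maintaining an
             * absolute vblank_offset. */
            atomic_sub(dev->last_vblank[crtc] - cur_vblank,
                       &dev->_vblank_count[crtc]);
    }
    dev->last_vblank[crtc] = cur_vblank;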
|
It turns out the radeon driver is affected by the same problem that prompted
i915 to revert to the less useful counter flipping at the end of the vblank
interval. In the long term, we can hopefully implement more reliable methods
of flipping the counter at the beginning of the vblank interval; until then,
this should be an acceptable workaround.
|
This reverts commit 6671ad1917698b6174a1af314b63b3800d75248c.
The vblank ioctl needs to update the userspace parameters when interrupted by
a signal, which was prevented by this. Let's see if this breaks other ioctls...
|
This should be pci_map_page, not pci_map_single.
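For reference, pci_map_single() takes a kernel virtual address, which
highmem pages do not have; pci_map_page() takes the struct page and offset
directly. A sketch of the difference (variable names illustrative):

    /* Wrong here: pci_map_single() wants a kernel virtual address,
     * which does not exist for highmem pages. */
    bus_addr = pci_map_single(dev->pdev, page_address(page), PAGE_SIZE,
                              PCI_DMA_BIDIRECTIONAL);

    /* Right: pci_map_page() takes the struct page plus an offset. */
    bus_addr = pci_map_page(dev->pdev, page, 0, PAGE_SIZE,
                            PCI_DMA_BIDIRECTIONAL);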
|
Just renaming this function and related parameters to match terminology used
elsewhere in the driver.
|
set_domain can block waiting for rendering to complete. If that process is
interrupted by a signal, it can return -EINTR. Catch this error in all
callers and correctly deal with the result.
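A sketch of the pattern each caller needs (the unwind details are schematic,
not the exact code at every call site):

    ret = i915_gem_object_set_domain(obj, read_domains, write_domain);
    if (ret != 0) {
            /* May be -EINTR if a signal arrived while waiting for
             * rendering; propagate it so userspace can restart the
             * ioctl rather than pretending the transition happened. */
            drm_gem_object_unreference(obj);
            mutex_unlock(&dev->struct_mutex);
            return ret;
    }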
|
I needed to rearrange some functions for this. I also needed to call
DRM_SPINUNINIT on the vbl_lock during cleanup.
|
Conflicts:
linux-core/Makefile.kernel
shared-core/i915_drv.h
shared-core/nouveau_state.c
|
The problem was revealed on 965, where the display list vertex buffer
would see:
create -> (CPU, CPU)
set_domain (CPU, CPU) -> (CPU, CPU)
set_domain (CPU, 0) -> (CPU, 0) (no clflush occurred)
execbuf (GPU, 0) -> (CPU+GPU, 0) (still no clflush)
instead of:
create -> (CPU, CPU)
set_domain (CPU, CPU) -> (CPU, CPU)
set_domain (CPU, 0) -> (CPU, CPU)
execbuf (GPU, 0) -> (CPU+GPU, 0) (clflushed)
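In other words, the clflush has to be keyed off the CPU write domain being
dropped, not off the new domains alone. A sketch of the check, assuming the
driver's i915_gem_clflush_object() helper:

    /* When the object leaves the CPU write domain, its dirty
     * cachelines must be flushed before anything else reads
     * the memory. */
    if ((obj->write_domain & I915_GEM_DOMAIN_CPU) &&
        (write_domain & I915_GEM_DOMAIN_CPU) == 0)
            i915_gem_clflush_object(obj);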
|
Otherwise, 965 constant state buffers get re-relocated every exec. Ouch.
|
Also remove an unreachable unlock.
|
The i915 driver now works again.
|
Object domain transfer can involve adding flush ops to the request queue,
and so the DRM lock must be held to avoid having the X server smash pointers
badly.
|
The interrupt enable register cannot be used to temporarily disable
interrupts; instead, use the interrupt mask register.
Note that this change means a pile of buffers will be left stuck on the
chip, as the final interrupts that would drain them will no longer be
recognized.
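A sketch of the masking pattern, assuming an irq_mask_reg shadow and the
I915REG_INT_MASK_R / USER_INT_FLAG names from this era's shared-core
headers (approximate):

    /* Temporarily disable: set bits in the mask register, leaving
     * the enable register alone. */
    dev_priv->irq_mask_reg |= USER_INT_FLAG;
    I915_WRITE(I915REG_INT_MASK_R, dev_priv->irq_mask_reg);

    /* Re-enable: clear the same bits in the mask. */
    dev_priv->irq_mask_reg &= ~USER_INT_FLAG;
    I915_WRITE(I915REG_INT_MASK_R, dev_priv->irq_mask_reg);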
|
This new ioctl returns whether re-using the buffer would force a wait.
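From userspace this looks roughly like the following, assuming the ioctl
surfaced as DRM_IOCTL_I915_GEM_BUSY with a handle/busy pair, as it later
did in the kernel:

    struct drm_i915_gem_busy busy = { .handle = bo_handle };

    if (ioctl(fd, DRM_IOCTL_I915_GEM_BUSY, &busy) == 0 && !busy.busy) {
            /* Safe to reuse the buffer without waiting on the GPU. */
    }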
|
i915_gem_flush_pwrite optimizes short writes to the buffer by clflushing
only the modified pages, but it was miscomputing the number of pages.
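The usual pitfall is deriving the count from the size alone rather than
from the first and last pages actually touched; a sketch of the span
computation (illustrative, not the exact patch):

    unsigned long first_page = offset >> PAGE_SHIFT;
    unsigned long last_page = (offset + size - 1) >> PAGE_SHIFT;
    unsigned long num_pages = last_page - first_page + 1;

    /* e.g. a 2-byte write straddling a page boundary touches 2 pages,
     * even though size >> PAGE_SHIFT would suggest 0. */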
|
When reading from multiple domains, allow each cache to continue to hold
data until a write occurs somewhere. This is done by first leaving
read_domains alone at bind time (presumably the CPU read cache still
contains valid data) and then, in set_domain, if no write_domain is
specified, simply merging the new read domains into the existing ones.
A huge comment was added above set_domain to explain how things are
expected to work.
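The merge itself is small; a sketch of set_domain's bookkeeping:

    if (write_domain == 0)
            obj->read_domains |= read_domains;  /* caches stay warm */
    else
            obj->read_domains = read_domains;   /* a write invalidates
                                                   the other caches */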
|
Newly allocated objects need to be in the CPU domain as they've just been
cleared by the CPU. Also, unmapping objects from the GTT needs to put them
into the CPU domain, both to flush rendering and to ensure that any paging
action gets flushed before we remap to the GTT.
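A sketch of the allocation-time initialization this implies:

    /* Freshly allocated pages were just cleared by the CPU, so the
     * object starts out readable and writable in the CPU domain. */
    obj->read_domains = I915_GEM_DOMAIN_CPU;
    obj->write_domain = I915_GEM_DOMAIN_CPU;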
|
Commands in the ring are parsed and started when the head pointer passes by
them, but they are not necessarily finished until a MI_FLUSH happens. This
patch inserts a flush after the execbuffer (the only place a flush wasn't
already happening).
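A sketch of such a flush using the driver's ring macros (the exact flush
bits are illustrative):

    BEGIN_LP_RING(2);
    OUT_RING(CMD_MI_FLUSH | MI_EXE_FLUSH);
    OUT_RING(0);
    ADVANCE_LP_RING();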
|
Recording the tail pointer in a local variable improves performance, but if
someone messes up and fails to reload at the right time, the driver will
write commands to the wrong part of the ring and scramble execution badly.
This change (available by setting I915_RING_VALIDATE to 1) checks to make
sure the cached tail pointer matches the hardware tail pointer at each ring
buffer addition, calling BUG_ON when that's not true.
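A sketch of the check, assuming a compile-time I915_RING_VALIDATE switch
(the hardware tail read is schematic; register names varied in this era):

    #if I915_RING_VALIDATE
    static void i915_ring_validate(struct drm_device *dev,
                                   drm_i915_ring_buffer_t *ring)
    {
            drm_i915_private_t *dev_priv = dev->dev_private;

            /* If the cached tail disagrees with the hardware's idea
             * of the tail, someone wrote the ring without updating
             * the cache. */
            BUG_ON(ring->tail != (I915_READ(PRB0_TAIL) & TAIL_ADDR));
    }
    #endif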
|
Ring locals must be reloaded from hardware in case the X server ran.
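A sketch of the reload, in the spirit of i915_kernel_lost_context()
(register names approximate):

    ring->head = I915_READ(PRB0_HEAD) & HEAD_ADDR;
    ring->tail = I915_READ(PRB0_TAIL) & TAIL_ADDR;
    ring->space = ring->head - (ring->tail + 8);
    if (ring->space < 0)
            ring->space += ring->Size;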
|
There are now 3 lists. Active is buffers currently in the ringbuffer.
Flushing is not in the ringbuffer, but needs a flush before unbinding.
Inactive is as before. This prevents object_free → unbind →
wait_rendering → object_reference and a kernel oops about weird refcounting.
This also avoids a synchronous extra flush and wait when freeing a buffer
which had a write_domain set (such as a temporary rendered to and then read
from using the 2D engine). It will sit around on the flushing list until the
appropriate flush gets emitted, or we need the GTT space for another
operation.
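A sketch of the lifecycle moves this implies, using the kernel's list
helpers (the exact call sites are schematic):

    /* execbuffer: the object is now referenced by the ring. */
    list_move_tail(&obj_priv->list, &dev_priv->mm.active_list);

    /* retiring with an unflushed write_domain: park until a flush. */
    if (obj->write_domain != 0)
            list_move_tail(&obj_priv->list, &dev_priv->mm.flushing_list);
    else
            list_move_tail(&obj_priv->list, &dev_priv->mm.inactive_list);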
|