summaryrefslogtreecommitdiff
path: root/shared-core
AgeCommit message (Collapse)Author
2007-10-25Tighten permissions on some buffer manager ioctls.Thomas Hellstrom
Set bo init minor to 0. Add the version function to header.
2007-10-25Buffer manager:Thomas Hellstrom
Implement a version check IOCTL for drivers that don't use drmMMInit from user-space. Remove the minor check from the kernel code. That's really up to the driver. Bump major.
2007-10-25Merge branch 'master' into drm-ttm-finalizeThomas Hellstrom
2007-10-25i915: relocate buffers before validation add memory barrier between twoDave Airlie
2007-10-25i915: remove relocatee kernel mapping sooner stops mutex taking during sleepDave Airlie
2007-10-24Fix missing \n on some DRM_ERROR in i915_dma.cEric Anholt
2007-10-24i915: use a drm memory barrier defineDave Airlie
2007-10-23Merge branch 'master' of git+ssh://git.freedesktop.org/git/mesa/drm into ↵Alan Hourihane
modesetting-101
2007-10-23i915: require mfence before submitting batchbufferDave Airlie
2007-10-23nouveau: fix IGPStephane Marchesin
2007-10-22A cmdbuf mutex to implement validate-submit-fence atomicity in the absenceThomas Hellstrom
of a hardware lock.
2007-10-22i915: split reloc execution into separate functionDave Airlie
2007-10-21Adapt i915 super-ioctl for lock-free operation.Thomas Hellstrom
2007-10-21Remove the need for the hardware lock in the buffer manager.Thomas Hellstrom
Add interface entry cleaning a memory type without touching NO_EVICT buffers.
2007-10-20Simple replacement for hardware lock in some cases.Thomas Hellstrom
Fix i915 since last commit.
2007-10-17Remove the op ioctl, and replace it with a setuser ioctl.Thomas Hellstrom
Remove need for lock for now. May create races when we clean memory areas or on takedown. Needs to be fixed. Really do a validate on buffer creation in order to avoid problems with fixed memory buffers.
2007-10-17Revert "Replace NO_MOVE/NO_EVICT flags to buffer objects with an ioctl to ↵Thomas Hellstrom
set pinning." This reverts cf2d569daca6954d11a796f4d110148ae2e0c827 commit.
2007-10-17Revert "Add some more verbosity to drm_bo_set_pin_req comments."Thomas Hellstrom
This reverts e7bfeb3031374653f7e55d67cc1b5c823849359f commit.
2007-10-17Fix a crash on X startupAlan Hourihane
2007-10-17i915: lock struct mutex about buffer object lookupsDave Airlie
2007-10-16Merge branch 'master' of git+ssh://git.freedesktop.org/git/mesa/drm into ↵Alan Hourihane
modesetting-101 Conflicts: linux-core/drm_bo.c linux-core/drm_objects.h shared-core/i915_dma.c shared-core/i915_drv.h
2007-10-16Drop destroy ioctls for fences and buffer objects.Kristian Høgsberg
We now always create a drm_ref_object for user objects and this is then the only things that holds a reference to the user object. This way unreference on will destroy the user object when the last drm_ref_object goes way.
2007-10-16Take bo type argument out of the ioctl interface.Kristian Høgsberg
The buffer object type is still tracked internally, but it is no longer part of the user space visible ioctl interface. If the bo create ioctl specifies a non-NULL buffer address we assume drm_bo_type_user, otherwise drm_bo_type_dc. Kernel side allocations call drm_buffer_object_create() directly and can still specify drm_bo_type_kernel. Not 100% this makes sense either, but with this patch, the buffer type is no longer exported and we can clean up the internals later on.
2007-10-16Eliminate support for fake buffers.[utf-8] Kristian Høgsberg
2007-10-16nouveau: revert unintended change.Ben Skeggs
2007-10-16nouveau: Cleanup PGRAPH handler, attempt to survive PGRAPH exceptions.Ben Skeggs
2007-10-16nouveau: Survive PFIFO_CACHE_ERROR.Ben Skeggs
2007-10-16nouveau: Handle multiple PFIFO exceptions per irq, cleanup output.Ben Skeggs
2007-10-14nouveau: PPC fixes. These regs are very touchy.Stephane Marchesin
2007-10-14nouveau: fix warning.Jeremy Kolb
2007-10-14nouveau: fix warning.Jeremy Kolb
2007-10-14i915: fix vbl_swap allocationDave Airlie
2007-10-12nouveau: Fix a typo in nv25_graph_context_initPekka Paalanen
2007-10-12nouveau: Fix typos in nv20_graph_context_initStuart Bennett
2007-10-12nouveau: Make notifiers go into PCI memoryPekka Paalanen
On some hardware notifers in AGP memory just don't work.
2007-10-12nouveau: mandatory "oops I forgot half of the files" commitArthur Huillet
2007-10-12nouveau: added support for software methods, and implemented those necessary ↵Arthur Huillet
for NV04 (TNT1) to start X
2007-10-12i915: add superioctl support to i915Dave Airlie
This adds the initial i915 superioctl interface. The interface should be sufficent even if the implementation may needs fixes/optimisations internally in the drm wrt caching etc.
2007-10-10nouveau : nv10 and nv04 PGRAPH_NSTATUS are differentMatthieu Castet
2007-10-10nouveau: PMC_BOOT_1 was not mapped.Maarten Maathuis
2007-10-10nouveau: try to fix big endian.Stephane Marchesin
2007-10-07nouveau: A char is signed, so it may overflow for >NV50.Maarten Maathuis
2007-10-06nouveau : print correct value in nouveau_graph_dump_trap_info for nv04Matthieu Castet
2007-10-05Merge branch 'pre-superioctl-branch'Dave Airlie
2007-10-04nouveau: Remove excess device classes.Maarten Maathuis
2007-10-04nouveau: NV47 context switching voodoo + warningMaarten Maathuis
2007-10-04nouveau: Switch over to using PMC_BOOT_0 for card detection.Maarten Maathuis
2007-10-04nouveau: nv2a drm context switch support.Stephane Marchesin
2007-10-02nouveau: nv20 graph_create_context differencePekka Paalanen
nv20 writes the chan->id to a different place than nv28. This still does not make nv20 run nv10_demo.
2007-10-02nouveau: fix nv25_graph_context_initPekka Paalanen
It was writing 4x the data in a loop.
/span> #if defined(CONFIG_X86) && (LINUX_VERSION_CODE < KERNEL_VERSION(2,6,15)) /* * These have bad performance in the AGP module for the indicated kernel versions. */ int drm_map_page_into_agp(struct page *page) { int i; i = change_page_attr(page, 1, PAGE_KERNEL_NOCACHE); /* Caller's responsibility to call global_flush_tlb() for * performance reasons */ return i; } int drm_unmap_page_from_agp(struct page *page) { int i; i = change_page_attr(page, 1, PAGE_KERNEL); /* Caller's responsibility to call global_flush_tlb() for * performance reasons */ return i; } #endif #if (LINUX_VERSION_CODE < KERNEL_VERSION(2,6,19)) /* * The protection map was exported in 2.6.19 */ pgprot_t vm_get_page_prot(unsigned long vm_flags) { #ifdef MODULE static pgprot_t drm_protection_map[16] = { __P000, __P001, __P010, __P011, __P100, __P101, __P110, __P111, __S000, __S001, __S010, __S011, __S100, __S101, __S110, __S111 }; return drm_protection_map[vm_flags & 0x0F]; #else extern pgprot_t protection_map[]; return protection_map[vm_flags & 0x0F]; #endif }; #endif #if (LINUX_VERSION_CODE < KERNEL_VERSION(2,6,15)) /* * vm code for kernels below 2.6.15 in which version a major vm write * occured. This implement a simple straightforward * version similar to what's going to be * in kernel 2.6.19+ * Kernels below 2.6.15 use nopage whereas 2.6.19 and upwards use * nopfn. */ static struct { spinlock_t lock; struct page *dummy_page; atomic_t present; } drm_np_retry = {SPIN_LOCK_UNLOCKED, NOPAGE_OOM, ATOMIC_INIT(0)}; static struct page *drm_bo_vm_fault(struct vm_area_struct *vma, struct fault_data *data); struct page * get_nopage_retry(void) { if (atomic_read(&drm_np_retry.present) == 0) { struct page *page = alloc_page(GFP_KERNEL); if (!page) return NOPAGE_OOM; spin_lock(&drm_np_retry.lock); drm_np_retry.dummy_page = page; atomic_set(&drm_np_retry.present,1); spin_unlock(&drm_np_retry.lock); } get_page(drm_np_retry.dummy_page); return drm_np_retry.dummy_page; } void free_nopage_retry(void) { if (atomic_read(&drm_np_retry.present) == 1) { spin_lock(&drm_np_retry.lock); __free_page(drm_np_retry.dummy_page); drm_np_retry.dummy_page = NULL; atomic_set(&drm_np_retry.present, 0); spin_unlock(&drm_np_retry.lock); } } struct page *drm_bo_vm_nopage(struct vm_area_struct *vma, unsigned long address, int *type) { struct fault_data data; if (type) *type = VM_FAULT_MINOR; data.address = address; data.vma = vma; drm_bo_vm_fault(vma, &data); switch (data.type) { case VM_FAULT_OOM: return NOPAGE_OOM; case VM_FAULT_SIGBUS: return NOPAGE_SIGBUS; default: break; } return NOPAGE_REFAULT; } #endif #if !defined(DRM_FULL_MM_COMPAT) && \ ((LINUX_VERSION_CODE < KERNEL_VERSION(2,6,15)) || \ (LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,19))) static int drm_pte_is_clear(struct vm_area_struct *vma, unsigned long addr) { struct mm_struct *mm = vma->vm_mm; int ret = 1; pte_t *pte; pmd_t *pmd; pud_t *pud; pgd_t *pgd; spin_lock(&mm->page_table_lock); pgd = pgd_offset(mm, addr); if (pgd_none(*pgd)) goto unlock; pud = pud_offset(pgd, addr); if (pud_none(*pud)) goto unlock; pmd = pmd_offset(pud, addr); if (pmd_none(*pmd)) goto unlock; pte = pte_offset_map(pmd, addr); if (!pte) goto unlock; ret = pte_none(*pte); pte_unmap(pte); unlock: spin_unlock(&mm->page_table_lock); return ret; } static int vm_insert_pfn(struct vm_area_struct *vma, unsigned long addr, unsigned long pfn) { int ret; if (!drm_pte_is_clear(vma, addr)) return -EBUSY; ret = io_remap_pfn_range(vma, addr, pfn, PAGE_SIZE, vma->vm_page_prot); return ret; } static struct page *drm_bo_vm_fault(struct vm_area_struct *vma, struct fault_data *data) { unsigned long address = data->address; drm_buffer_object_t *bo = (drm_buffer_object_t *) vma->vm_private_data; unsigned long page_offset; struct page *page = NULL; drm_ttm_t *ttm; drm_device_t *dev; unsigned long pfn; int err; unsigned long bus_base; unsigned long bus_offset; unsigned long bus_size; mutex_lock(&bo->mutex); err = drm_bo_wait(bo, 0, 1, 0); if (err) { data->type = (err == -EAGAIN) ? VM_FAULT_MINOR : VM_FAULT_SIGBUS; goto out_unlock; } /* * If buffer happens to be in a non-mappable location, * move it to a mappable. */ if (!(bo->mem.flags & DRM_BO_FLAG_MAPPABLE)) { unsigned long _end = jiffies + 3*DRM_HZ; uint32_t new_mask = bo->mem.mask | DRM_BO_FLAG_MAPPABLE | DRM_BO_FLAG_FORCE_MAPPABLE; do { err = drm_bo_move_buffer(bo, new_mask, 0, 0); } while((err == -EAGAIN) && !time_after_eq(jiffies, _end)); if (err) { DRM_ERROR("Timeout moving buffer to mappable location.\n"); data->type = VM_FAULT_SIGBUS; goto out_unlock; } } if (address > vma->vm_end) { data->type = VM_FAULT_SIGBUS; goto out_unlock; } dev = bo->dev; err = drm_bo_pci_offset(dev, &bo->mem, &bus_base, &bus_offset, &bus_size); if (err) { data->type = VM_FAULT_SIGBUS; goto out_unlock; } page_offset = (address - vma->vm_start) >> PAGE_SHIFT; if (bus_size) { drm_mem_type_manager_t *man = &dev->bm.man[bo->mem.mem_type]; pfn = ((bus_base + bus_offset) >> PAGE_SHIFT) + page_offset; vma->vm_page_prot = drm_io_prot(man->drm_bus_maptype, vma); } else { ttm = bo->ttm; drm_ttm_fixup_caching(ttm); page = drm_ttm_get_page(ttm, page_offset); if (!page) { data->type = VM_FAULT_OOM; goto out_unlock; } pfn = page_to_pfn(page); vma->vm_page_prot = vm_get_page_prot(vma->vm_flags); } err = vm_insert_pfn(vma, address, pfn); if (!err || err == -EBUSY) data->type = VM_FAULT_MINOR; else data->type = VM_FAULT_OOM; out_unlock: mutex_unlock(&bo->mutex); return NULL; } #endif #if (LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,19)) && \ !defined(DRM_FULL_MM_COMPAT) /** */ unsigned long drm_bo_vm_nopfn(struct vm_area_struct * vma, unsigned long address) { struct fault_data data; data.address = address; (void) drm_bo_vm_fault(vma, &data); if (data.type == VM_FAULT_OOM) return NOPFN_OOM; else if (data.type == VM_FAULT_SIGBUS) return NOPFN_SIGBUS; /* * pfn already set. */ return 0; } #endif #ifdef DRM_ODD_MM_COMPAT /* * VM compatibility code for 2.6.15-2.6.18. This code implements a complicated * workaround for a single BUG statement in do_no_page in these versions. The * tricky thing is that we need to take the mmap_sem in exclusive mode for _all_ * vmas mapping the ttm, before dev->struct_mutex is taken. The way we do this is to * check first take the dev->struct_mutex, and then trylock all mmap_sems. If this * fails for a single mmap_sem, we have to release all sems and the dev->struct_mutex, * release the cpu and retry. We also need to keep track of all vmas mapping the ttm. * phew. */ typedef struct p_mm_entry { struct list_head head; struct mm_struct *mm; atomic_t refcount; int locked; } p_mm_entry_t; typedef struct vma_entry { struct list_head head; struct vm_area_struct *vma; } vma_entry_t; struct page *drm_bo_vm_nopage(struct vm_area_struct *vma, unsigned long address, int *type) { drm_buffer_object_t *bo = (drm_buffer_object_t *) vma->vm_private_data; unsigned long page_offset; struct page *page; drm_ttm_t *ttm; drm_device_t *dev; mutex_lock(&bo->mutex); if (type) *type = VM_FAULT_MINOR; if (address > vma->vm_end) { page = NOPAGE_SIGBUS; goto out_unlock; } dev = bo->dev; if (drm_mem_reg_is_pci(dev, &bo->mem)) { DRM_ERROR("Invalid compat nopage.\n"); page = NOPAGE_SIGBUS; goto out_unlock; } ttm = bo->ttm; drm_ttm_fixup_caching(ttm); page_offset = (address - vma->vm_start) >> PAGE_SHIFT; page = drm_ttm_get_page(ttm, page_offset); if (!page) { page = NOPAGE_OOM; goto out_unlock; } get_page(page); out_unlock: mutex_unlock(&bo->mutex); return page; } int drm_bo_map_bound(struct vm_area_struct *vma) { drm_buffer_object_t *bo = (drm_buffer_object_t *)vma->vm_private_data; int ret = 0; unsigned long bus_base; unsigned long bus_offset; unsigned long bus_size; ret = drm_bo_pci_offset(bo->dev, &bo->mem, &bus_base, &bus_offset, &bus_size); BUG_ON(ret); if (bus_size) { drm_mem_type_manager_t *man = &bo->dev->bm.man[bo->mem.mem_type]; unsigned long pfn = (bus_base + bus_offset) >> PAGE_SHIFT; pgprot_t pgprot = drm_io_prot(man->drm_bus_maptype, vma); ret = io_remap_pfn_range(vma, vma->vm_start, pfn, vma->vm_end - vma->vm_start, pgprot); } return ret; } int drm_bo_add_vma(drm_buffer_object_t * bo, struct vm_area_struct *vma) { p_mm_entry_t *entry, *n_entry; vma_entry_t *v_entry; struct mm_struct *mm = vma->vm_mm; v_entry = drm_ctl_alloc(sizeof(*v_entry), DRM_MEM_BUFOBJ); if (!v_entry) { DRM_ERROR("Allocation of vma pointer entry failed\n"); return -ENOMEM; } v_entry->vma = vma; list_add_tail(&v_entry->head, &bo->vma_list); list_for_each_entry(entry, &bo->p_mm_list, head) { if (mm == entry->mm) { atomic_inc(&entry->refcount); return 0; } else if ((unsigned long)mm < (unsigned long)entry->mm) ; } n_entry = drm_ctl_alloc(sizeof(*n_entry), DRM_MEM_BUFOBJ); if (!n_entry) { DRM_ERROR("Allocation of process mm pointer entry failed\n"); return -ENOMEM; } INIT_LIST_HEAD(&n_entry->head); n_entry->mm = mm; n_entry->locked = 0; atomic_set(&n_entry->refcount, 0); list_add_tail(&n_entry->head, &entry->head); return 0; } void drm_bo_delete_vma(drm_buffer_object_t * bo, struct vm_area_struct *vma) { p_mm_entry_t *entry, *n; vma_entry_t *v_entry, *v_n; int found = 0; struct mm_struct *mm = vma->vm_mm; list_for_each_entry_safe(v_entry, v_n, &bo->vma_list, head) { if (v_entry->vma == vma) { found = 1; list_del(&v_entry->head); drm_ctl_free(v_entry, sizeof(*v_entry), DRM_MEM_BUFOBJ); break; } } BUG_ON(!found); list_for_each_entry_safe(entry, n, &bo->p_mm_list, head) {