dma-buf-mgr handles the case of reserving single or multiple dma-bufs while trying to prevent deadlocks from buffers being reserved simultaneously. For this to happen, extra functions have been introduced:

+ dma_buf_reserve()
+ dma_buf_unreserve()
+ dma_buf_wait_unreserved()

Reserve a single buffer, optionally with a sequence number to indicate that this is part of a multi-dmabuf reservation. This function will return -EDEADLK and return immediately if reserving would cause a deadlock. In case a single buffer is being reserved, no sequence is needed; otherwise, please use the dmabufmgr calls.
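The shape of that contract can be illustrated with a toy userspace mock (the mock_* names are invented for illustration; the real implementation uses atomics, a wait queue and a global sequence counter, and distinguishes more cases):

```c
#include <errno.h>
#include <stdbool.h>

/* Toy mock: a buffer reserved under the same sequence number again means
 * the caller is blocking on itself, i.e. a deadlock (-EDEADLK); a buffer
 * held under another sequence means "wait for it" (-EBUSY). */
struct mock_buf {
        bool reserved;
        unsigned int seq;       /* sequence of the reservation holding it */
};

static int mock_reserve(struct mock_buf *buf, unsigned int seq)
{
        if (buf->reserved) {
                if (buf->seq == seq)
                        return -EDEADLK; /* we already hold it */
                return -EBUSY;           /* held by someone else */
        }
        buf->reserved = true;
        buf->seq = seq;
        return 0;
}

static void mock_unreserve(struct mock_buf *buf)
{
        buf->reserved = false;
}
```

A caller getting -EDEADLK is expected to back off all of its reservations and retry with a new sequence number, which is exactly what dmabufmgr_reserve_buffers() automates.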

If you want to attach an exclusive dma-fence, you have to wait until all shared fences have signalled completion. If there are none, or if a shared fence has to be attached, wait until the last exclusive fence has signalled completion.

The new fence has to be attached before unreserving the buffer, and in exclusive mode all previous fences will have to be removed from the buffer, and unreffed when done with them.
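The fence bookkeeping on a buffer can be sketched with a small userspace mock (mock_* names are invented; the real code works on struct dma_buf under the reserve lock): attaching an exclusive fence drops the references to all previously attached fences, attaching a shared fence only appends.

```c
#include <stddef.h>

#define MOCK_MAX_SHARED 8

struct mock_fence { int refcount; };

struct mock_fbuf {
        struct mock_fence *excl;
        struct mock_fence *shared[MOCK_MAX_SHARED];
        int shared_count;
};

static void mock_fence_get(struct mock_fence *f) { f->refcount++; }
static void mock_fence_put(struct mock_fence *f) { f->refcount--; }

/* shared mode: just append, keeping the old fences alive */
static void mock_attach_shared(struct mock_fbuf *buf, struct mock_fence *f)
{
        mock_fence_get(f);
        buf->shared[buf->shared_count++] = f;
}

/* exclusive mode: all previous fences are removed and unreffed */
static void mock_attach_excl(struct mock_fbuf *buf, struct mock_fence *f)
{
        int i;

        for (i = 0; i < buf->shared_count; i++)
                mock_fence_put(buf->shared[i]);
        buf->shared_count = 0;

        if (buf->excl)
                mock_fence_put(buf->excl);

        mock_fence_get(f);
        buf->excl = f;
}
```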

dmabufmgr methods:

+ dmabufmgr_validate_init()
  This function inits a dmabufmgr_validate structure and appends it to the
  tail of the list, with the refcount set to 1.

+ dmabufmgr_validate_put()
  Convenience function to unref and free a dmabufmgr_validate structure.
  However, if it's used for custom callback signalling, a custom function
  should be implemented.

+ dmabufmgr_reserve_buffers()
  This function takes a linked list of dmabufmgr_validate's, each one of
  which requires the following members to be set by the caller:
  - validate->head, list head
  - validate->bo, must be set to the dma-buf to reserve
  - validate->shared, set to true if opened in shared mode
  - validate->priv, can be used by the caller to identify this buffer

This function will then set the following members on successful completion:

  - validate->num_fences, the number of valid fences to wait on before this
    buffer can be accessed. This can be 0.
  - validate->fences[0...num_fences-1], the fences to wait on

+ dmabufmgr_backoff_reservation()
  This can be used when the caller encounters an error between reservation
  and usage. No new fence will be attached and all reservations will be
  undone without side effects.

+ dmabufmgr_fence_buffer_objects()
  Upon successful completion a new fence will have to be attached. This
  function releases old fences and attaches the new one.
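Taken together, a caller might look roughly like this (kernel-context sketch only, not compilable as-is: the exact dma_fence_create() signature is elided, my_request is a hypothetical per-driver cookie, the validate refcount is ignored, and error handling is abridged):

```
LIST_HEAD(list);
struct dmabufmgr_validate *val;
struct dma_fence *fence;
int ret;

val = kzalloc(sizeof(*val), GFP_KERNEL);
val->bo = dmabuf;               /* dma-buf to reserve */
val->shared = false;            /* exclusive access */
val->priv = my_request;         /* hypothetical per-driver cookie */
list_add_tail(&val->head, &list);

ret = dmabufmgr_reserve_buffers(&list);
if (ret)
        goto err_free;

/* simplest possible wait; custom code would queue fence callbacks instead */
ret = dmabufmgr_wait_completed_cpu(&list, true, jiffies + HZ);
if (ret)
        goto err_backoff;

/* ... submit hardware commands, then commit with a new fence ... */
fence = dma_fence_create(/* ... */);
dmabufmgr_fence_buffer_objects(fence, &list);
dma_fence_put(fence);           /* the dma-bufs now hold their own refs */
kfree(val);
return 0;

err_backoff:
        dmabufmgr_backoff_reservation(&list);
err_free:
        kfree(val);
        return ret;
```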

The rationale for refcounting dmabufmgr_validate lies in the dma_fence_cb wait member. Before calling dma_fence_add_callback you should increase the refcount on dmabufmgr_validate with dmabufmgr_validate_get, and on signal completion you should call kref_put(&val->refcount, custom_free_signal). After all callbacks have been added you drop the refcount by 1 as well; when the refcount drops to 0, all callbacks have been signalled, and the dmabufmgr_validate has been waited on and can be freed. Since this will require atomic spinlocks to unlink the list and signal completion, a deadlock could occur if you try to call add_callback otherwise, so the refcount is used as a means of preventing this from occurring, by having your custom free function take a device-specific lock, remove the entry from the list and free the data. The nice/evil part about this is that it will also guarantee that no memory leaks can occur behind your back. This allows delaying completion by moving the dmabufmgr_validate list to be a part of the committed reservation.
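The refcounting scheme itself can be demonstrated with a small userspace mock (mock_* names invented; a real custom put would be a kref release function taking a device-specific lock): one reference per queued callback plus one for the caller, with the entry unlinking itself on the final put, so an empty list means every fence has signalled.

```c
#include <stdlib.h>

/* Toy mock of the dmabufmgr_validate refcounting: *list_len stands in
 * for the containing list; the entry "unlinks" (decrements it) and frees
 * itself only on the last put. */
struct mock_val {
        int refcount;
        int *list_len;          /* length of the containing "list" */
};

static struct mock_val *mock_val_create(int *list_len)
{
        struct mock_val *v = calloc(1, sizeof(*v));

        v->refcount = 1;        /* caller's reference */
        v->list_len = list_len;
        (*list_len)++;
        return v;
}

static void mock_val_get(struct mock_val *v)
{
        v->refcount++;
}

/* drop one reference; unlink and free on the last put, returning 1 */
static int mock_val_put(struct mock_val *v)
{
        if (--v->refcount)
                return 0;
        (*v->list_len)--;       /* stands in for list_del() */
        free(v);
        return 1;
}
```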

diff --git a/Documentation/DocBook/device-drivers.tmpl b/Documentation/DocBook/device-drivers.tmpl
index 36252ac..2fc050c 100644
--- a/Documentation/DocBook/device-drivers.tmpl
+++ b/Documentation/DocBook/device-drivers.tmpl
@@ -128,6 +128,8 @@ X!Edrivers/base/interface.c
 !Edrivers/base/dma-buf.c
 !Edrivers/base/dma-fence.c
 !Iinclude/linux/dma-fence.h
+!Edrivers/base/dma-buf-mgr.c
+!Iinclude/linux/dma-buf-mgr.h
 !Edrivers/base/dma-coherent.c
 !Edrivers/base/dma-mapping.c
 </sect1>
diff --git a/Documentation/dma-buf-synchronization.txt b/Documentation/dma-buf-synchronization.txt
new file mode 100644
index 0000000..dd4685e
--- /dev/null
+++ b/Documentation/dma-buf-synchronization.txt
@@ -0,0 +1,197 @@
+                 DMA Buffer Synchronization API Guide
+                 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+                        Maarten Lankhorst
+                <maarten.lankhorst@canonical.com>
+                   <m.b.lankhorst@gmail.com>
+
+This is a followup to dma-buf-sharing.txt, which should be read first.
+Unless you're dealing with the simplest of cases, you're going to need
+synchronization. This is done with the help of dma-fence and dma-buf-mgr.
+
+
+dma-fence
+---------
+
+dma-fence is simply a synchronization primitive used mostly by dma-buf-mgr.
+In general, driver writers would not need to implement their own kind of
+dma-fence, but re-use the existing types. The possibility is left open for
+platforms which support alternate means of hardware synchronization between
+IP blocks to provide their own implementation shared by the drivers on that
+platform.
+
+The base dma-fence is sufficient for software based signaling. Ie. when the
+signaling driver gets an irq, calls dma_fence_signal() which wakes other
+driver(s) that are waiting for the fence to be signaled.
+
+But to support cases where no CPU involvement is required in the buffer
+handoff between two devices, different fence implementations can be used. By
+comparing the ops pointer with known ops, it is possible to see if the fence
+you are waiting on works in a special way known to your driver, and act
+differently based upon that. For example dma_seqno_fence allows hardware
+waiting until the condition is met:
+
+  (s32)((sync_buf)[seqno_ofs] - seqno) >= 0
+
+But all dma-fences should have a software fallback, for the driver creating
+the fence does not know if the driver waiting on the fence supports hardware
+signaling. The enable_signaling() callback is to notify the fence
+implementation (or possibly the creator of the fence) that some other driver
+is waiting for software notification and dma_fence_signal() must be called
+once the fence is passed. This could be used to enable some irq that would
+not normally be enabled, etc, so that the CPU is woken once the fence
+condition has arrived.
+
+
+dma-buf-mgr overview
+--------------------
+
+dma-buf-mgr is a reservation manager, and it is used to handle the case where
+multiple devices want to access multiple dma-bufs in an arbitrary order; it
+uses dma-fences for synchronization. There are 3 steps that are important
+here:
+
+1. Reservation of all dma-buf buffers with dma-buf-mgr
+   - Create a struct dmabufmgr_validate for each one with a call to
+     dmabufmgr_validate_init()
+   - Reserve the list with dmabufmgr_reserve_buffers()
+2. Queueing waits and allocating a new dma-fence
+   - dmabufmgr_wait_completed_cpu or custom implementation.
+     * Custom implementation can use dma_fence_wait, dma_fence_add_callback
+       or a custom method that would depend on the fence type.
+     * An implementation that uses dma_fence_add_callback can use the
+       refcounting of dmabufmgr_validate to do signal completion; when
+       the original list head is empty, all fences would have been signaled
+       and the command sequence can start running. This requires a custom put.
+   - dma_fence_create, dma_seqno_fence_init or custom implementation
+     that calls __dma_fence_init.
+3. Committing with the new dma-fence.
+   - dmabufmgr_fence_buffer_objects
+   - reduce refcount of list by 1 with dmabufmgr_validate_put or custom put.
+
+The waits queued in step 2 don't have to be completed before commit; this
+allows users of dma-buf-mgr to prevent stalls for as long as possible.
+
+
+dma-fence operations
+--------------------
+
+dma_fence_get() increments the refcount on a dma-fence by 1.
+dma_fence_put() decrements the refcount by 1.
+  Each dma-buf the dma-fence is attached to will also hold a reference to
+  the dma-fence, but this will be removed by dma-buf-mgr upon committing a
+  reservation.
+
+dma_fence_ops.enable_signaling()
+  Indicates dma_fence_signal will have to be called; any error code returned
+  will cause the fence to be signaled. On success, if the dma_fence creator
+  didn't already hold a refcount, it should increase the refcount, and
+  decrease it after calling dma_fence_signal.
+
+dma_fence_ops.release()
+  Can be NULL; this function allows additional commands to run on destruction
+  of the dma_fence.
+
+dma_fence_signal()
+  Signal completion for software callbacks on a dma-fence; this will unblock
+  dma_fence_wait() calls and run all the callbacks added with
+  dma_fence_add_callback().
+
+dma_fence_wait()
+  Do a synchronous wait on this dma-fence. It is assumed the caller directly
+  or indirectly (dma-buf-mgr between reservation and committing) holds a
+  reference to the dma-fence, otherwise the dma-fence might be freed
+  before return, resulting in undefined behavior.
+
+dma_fence_add_callback()
+  Add a software callback to the dma-fence. The same restrictions apply to
+  the refcount as for dma_fence_wait; however, the caller doesn't need to
+  keep a refcount to the dma-fence afterwards: when software access is
+  enabled, the creator of the dma-fence is required to keep the fence alive
+  until after it signals with dma_fence_signal. The callback itself can be
+  called from irq context.
+
+  This function returns -EINVAL if an input parameter is NULL, or -ENOENT
+  if the fence was already signaled.
+
+  *WARNING*:
+  Cancelling a callback should only be done if you really know what you're
+  doing, since deadlocks and race conditions could occur all too easily. For
+  this reason, it should only ever be done on hardware lockup recovery.
+
+dma_fence_create()
+  Create a software only fence; the creator must keep its reference until
+  after it calls dma_fence_signal.
+
+__dma_fence_init()
+  Initializes an allocated fence; the caller doesn't have to keep its
+  refcount after committing with this fence, but it will need to hold a
+  refcount again if dma_fence_ops.enable_signaling gets called. This can
+  be used for implementing other types of dma_fence.
+
+dma_seqno_fence_init()
+  Initializes a dma_seqno_fence; the caller will need to be able to
+  enable software completion, but it also completes when
+  (s32)((sync_buf)[seqno_ofs] - seqno) >= 0 is true.
+
+  The dma_seqno_fence will take a refcount on sync_buf until it's destroyed.
+
+  Certain hardware has instructions to insert this type of wait condition
+  in the command stream, so no intervention from software would be needed.
+  This type of fence can be destroyed before completion; however, a reference
+  on the sync_buf dma-buf can be taken. It is encouraged to re-use the same
+  dma-buf, since mapping or unmapping the sync_buf to the device's vm can be
+  expensive.
+
+
+dma-buf-mgr operations
+----------------------
+
+dmabufmgr_validate_init()
+  Initializes a struct dmabufmgr_validate for use with dmabufmgr methods, and
+  appends it to the list.
+
+dmabufmgr_validate_get()
+dmabufmgr_validate_put()
+  Increase or decrease a reference to a dmabufmgr_validate; these are
+  convenience functions and don't have to be used. The dmabufmgr commands
+  below will never touch the refcount.
+
+dmabufmgr_reserve_buffers()
+  Attempts to reserve a list of dmabufmgr_validate. This function does not
+  decrease or increase refcount on dmabufmgr_validate.
+
+  When this command returns 0 (success), the following
+  dmabufmgr_validate members become valid:
+  num_fences, fences[0...num_fences)
+
+  The caller will have to queue waits on those fences before calling
+  dmabufmgr_fence_buffer_objects; dma_fence_add_callback will keep
+  the fence alive until it is signaled.
+
+  As such, by incrementing refcount on dmabufmgr_validate before calling
+  dma_fence_add_callback, and making the callback decrement refcount on
+  dmabufmgr_validate, or releasing refcount if dma_fence_add_callback
+  failed, the dmabufmgr_validate would be freed when all the fences
+  have been signaled, and only after the last ref is released, which should
+  be after dmabufmgr_fence_buffer_objects. With proper locking, when the
+  list_head holding the list of dmabufmgr_validate's becomes empty it
+  indicates all fences for all dma-bufs have been signaled.
+
+dmabufmgr_backoff_reservation()
+  Unreserves a list of dmabufmgr_validate's, after dmabufmgr_reserve_buffers
+  was called. This function does not decrease or increase refcount on
+  dmabufmgr_validate.
+
+dmabufmgr_fence_buffer_objects()
+  Commits the list of dmabufmgr_validate's with the dma-fence specified.
+  This should be done after dmabufmgr_reserve_buffers was called successfully.
+  dmabufmgr_backoff_reservation doesn't need to be called after this.
+  This function does not decrease or increase refcount on dmabufmgr_validate.
+
+dmabufmgr_wait_completed_cpu()
+  Will block until all dmabufmgr_validate's have been completed, a signal
+  has been received, or the wait timed out. This is a convenience function
+  to speed up initial implementations; however, since this blocks
+  synchronously, this is not the best way to wait.
+  Can be called after dmabufmgr_reserve_buffers returned, but before
+  dmabufmgr_backoff_reservation or dmabufmgr_fence_buffer_objects.
diff --git a/drivers/base/Makefile b/drivers/base/Makefile
index 6e9f217..f11d40f 100644
--- a/drivers/base/Makefile
+++ b/drivers/base/Makefile
@@ -10,7 +10,7 @@ obj-$(CONFIG_CMA) += dma-contiguous.o
 obj-y += power/
 obj-$(CONFIG_HAS_DMA) += dma-mapping.o
 obj-$(CONFIG_HAVE_GENERIC_DMA_COHERENT) += dma-coherent.o
-obj-$(CONFIG_DMA_SHARED_BUFFER) += dma-buf.o dma-fence.o
+obj-$(CONFIG_DMA_SHARED_BUFFER) += dma-buf.o dma-fence.o dma-buf-mgr.o
 obj-$(CONFIG_ISA) += isa.o
 obj-$(CONFIG_FW_LOADER) += firmware_class.o
 obj-$(CONFIG_NUMA) += node.o
diff --git a/drivers/base/dma-buf-mgr.c b/drivers/base/dma-buf-mgr.c
new file mode 100644
index 0000000..899a99b
--- /dev/null
+++ b/drivers/base/dma-buf-mgr.c
@@ -0,0 +1,277 @@
+/*
+ * Copyright (C) 2012 Canonical Ltd
+ *
+ * Based on ttm_bo.c which bears the following copyright notice,
+ * but is dual licensed:
+ *
+ * Copyright (c) 2006-2009 VMware, Inc., Palo Alto, CA., USA
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the
+ * "Software"), to deal in the Software without restriction, including
+ * without limitation the rights to use, copy, modify, merge, publish,
+ * distribute, sub license, and/or sell copies of the Software, and to
+ * permit persons to whom the Software is furnished to do so, subject to
+ * the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the
+ * next paragraph) shall be included in all copies or substantial portions
+ * of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM,
+ * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
+ * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
+ * USE OR OTHER DEALINGS IN THE SOFTWARE.
+ *
+ **************************************************************************/
+/*
+ * Authors: Thomas Hellstrom <thellstrom-at-vmware-dot-com>
+ */
+
+
+#include <linux/dma-buf-mgr.h>
+#include <linux/export.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+
+static void dmabufmgr_backoff_reservation_locked(struct list_head *list)
+{
+        struct dmabufmgr_validate *entry;
+
+        list_for_each_entry(entry, list, head) {
+                struct dma_buf *bo = entry->bo;
+                if (!entry->reserved)
+                        continue;
+                entry->reserved = false;
+
+                entry->num_fences = 0;
+
+                atomic_set(&bo->reserved, 0);
+                wake_up_all(&bo->event_queue);
+        }
+}
+
+static int
+dmabufmgr_wait_unreserved_locked(struct list_head *list,
+                                 struct dma_buf *bo)
+{
+        int ret;
+
+        spin_unlock(&dma_buf_reserve_lock);
+        ret = dma_buf_wait_unreserved(bo, true);
+        spin_lock(&dma_buf_reserve_lock);
+        if (unlikely(ret != 0))
+                dmabufmgr_backoff_reservation_locked(list);
+        return ret;
+}
+
+/**
+ * dmabufmgr_backoff_reservation - cancel a reservation
+ * @list: [in] a linked list of struct dmabufmgr_validate
+ *
+ * This function cancels a previous reservation done by
+ * dmabufmgr_reserve_buffers. This is useful in case something
+ * goes wrong between reservation and committing.
+ *
+ * Please read Documentation/dma-buf-synchronization.txt
+ */
+void
+dmabufmgr_backoff_reservation(struct list_head *list)
+{
+        if (list_empty(list))
+                return;
+
+        spin_lock(&dma_buf_reserve_lock);
+        dmabufmgr_backoff_reservation_locked(list);
+        spin_unlock(&dma_buf_reserve_lock);
+}
+EXPORT_SYMBOL_GPL(dmabufmgr_backoff_reservation);
+
+/**
+ * dmabufmgr_reserve_buffers - reserve a list of dmabufmgr_validate
+ * @list: [in] a linked list of struct dmabufmgr_validate
+ *
+ * Please read Documentation/dma-buf-synchronization.txt
+ */
+int
+dmabufmgr_reserve_buffers(struct list_head *list)
+{
+        struct dmabufmgr_validate *entry;
+        int ret;
+        u32 val_seq;
+
+        if (list_empty(list))
+                return 0;
+
+        list_for_each_entry(entry, list, head) {
+                entry->reserved = false;
+                entry->num_fences = 0;
+        }
+
+retry:
+        spin_lock(&dma_buf_reserve_lock);
+        val_seq = atomic_inc_return(&dma_buf_reserve_counter);
+
+        list_for_each_entry(entry, list, head) {
+                struct dma_buf *bo = entry->bo;
+
+retry_this_bo:
+                ret = dma_buf_reserve_locked(bo, true, true, true, val_seq);
+                switch (ret) {
+                case 0:
+                        break;
+                case -EBUSY:
+                        ret = dmabufmgr_wait_unreserved_locked(list, bo);
+                        if (unlikely(ret != 0)) {
+                                spin_unlock(&dma_buf_reserve_lock);
+                                return ret;
+                        }
+                        goto retry_this_bo;
+                case -EAGAIN:
+                        dmabufmgr_backoff_reservation_locked(list);
+                        spin_unlock(&dma_buf_reserve_lock);
+                        ret = dma_buf_wait_unreserved(bo, true);
+                        if (unlikely(ret != 0))
+                                return ret;
+                        goto retry;
+                default:
+                        dmabufmgr_backoff_reservation_locked(list);
+                        spin_unlock(&dma_buf_reserve_lock);
+                        return ret;
+                }
+
+                entry->reserved = true;
+
+                if (entry->shared &&
+                    bo->fence_shared_count == DMA_BUF_MAX_SHARED_FENCE) {
+                        WARN_ON_ONCE(1);
+                        dmabufmgr_backoff_reservation_locked(list);
+                        spin_unlock(&dma_buf_reserve_lock);
+                        return -EINVAL;
+                }
+
+                if (!entry->shared && bo->fence_shared_count) {
+                        entry->num_fences = bo->fence_shared_count;
+
+                        BUILD_BUG_ON(sizeof(entry->fences) !=
+                                     sizeof(bo->fence_shared));
+
+                        memcpy(entry->fences, bo->fence_shared,
+                               sizeof(bo->fence_shared));
+                } else if (bo->fence_excl) {
+                        entry->num_fences = 1;
+                        entry->fences[0] = bo->fence_excl;
+                } else
+                        entry->num_fences = 0;
+        }
+        spin_unlock(&dma_buf_reserve_lock);
+
+        return 0;
+}
+EXPORT_SYMBOL_GPL(dmabufmgr_reserve_buffers);
+
+/**
+ * dmabufmgr_wait_completed_cpu - wait synchronously for completion on cpu
+ * @list: [in] a linked list of struct dmabufmgr_validate
+ * @intr: [in] perform an interruptible wait
+ * @timeout: [in] absolute timeout in jiffies
+ *
+ * Since this function waits synchronously it is meant mostly for cases where
+ * stalling is unimportant, or to speed up initial implementations.
+ */
+int
+dmabufmgr_wait_completed_cpu(struct list_head *list, bool intr,
+                             unsigned long timeout)
+{
+        struct dmabufmgr_validate *entry;
+        int i, ret = 0;
+
+        list_for_each_entry(entry, list, head) {
+                for (i = 0; i < entry->num_fences && !ret; i++)
+                        ret = dma_fence_wait(entry->fences[i], intr, timeout);
+
+                if (ret && ret != -ERESTARTSYS)
+                        pr_err("waiting returns %i\n", ret);
+                if (ret)
+                        return ret;
+        }
+        return 0;
+}
+EXPORT_SYMBOL_GPL(dmabufmgr_wait_completed_cpu);
+
+/**
+ * dmabufmgr_fence_buffer_objects - commit a reservation with a new fence
+ * @fence: [in] the fence that indicates completion
+ * @list: [in] a linked list of struct dmabufmgr_validate
+ *
+ * This function should be called after a hardware command submission is
+ * completed successfully. The fence is used to indicate completion of
+ * those commands.
+ *
+ * Please read Documentation/dma-buf-synchronization.txt
+ */
+void
+dmabufmgr_fence_buffer_objects(struct dma_fence *fence, struct list_head *list)
+{
+        struct dmabufmgr_validate *entry;
+        struct dma_buf *bo;
+
+        if (list_empty(list) || WARN_ON(!fence))
+                return;
+
+        /* Until deferred fput hits mainline, release old things here */
+        list_for_each_entry(entry, list, head) {
+                bo = entry->bo;
+
+                if (!entry->shared) {
+                        int i;
+                        for (i = 0; i < bo->fence_shared_count; ++i) {
+                                dma_fence_put(bo->fence_shared[i]);
+                                bo->fence_shared[i] = NULL;
+                        }
+                        bo->fence_shared_count = 0;
+                        if (bo->fence_excl) {
+                                dma_fence_put(bo->fence_excl);
+                                bo->fence_excl = NULL;
+                        }
+                }
+
+                entry->reserved = false;
+        }
+
+        spin_lock(&dma_buf_reserve_lock);
+
+        list_for_each_entry(entry, list, head) {
+                bo = entry->bo;
+
+                dma_fence_get(fence);
+                if (entry->shared)
+                        bo->fence_shared[bo->fence_shared_count++] = fence;
+                else
+                        bo->fence_excl = fence;
+
+                dma_buf_unreserve_locked(bo);
+        }
+
+        spin_unlock(&dma_buf_reserve_lock);
+}
+EXPORT_SYMBOL_GPL(dmabufmgr_fence_buffer_objects);
+
+/**
+ * dmabufmgr_validate_free - simple free function for dmabufmgr_validate
+ * @ref: [in] pointer to dmabufmgr_validate::refcount to free
+ *
+ * Can be called when refcount drops to 0, but isn't required to be used.
+ */
+void dmabufmgr_validate_free(struct kref *ref)
+{
+        struct dmabufmgr_validate *val;
+        val = container_of(ref, struct dmabufmgr_validate, refcount);
+        list_del(&val->head);
+        kfree(val);
+}
+EXPORT_SYMBOL_GPL(dmabufmgr_validate_free);
diff --git a/drivers/base/dma-buf.c b/drivers/base/dma-buf.c
index 24e88fe..a19a518 100644
--- a/drivers/base/dma-buf.c
+++ b/drivers/base/dma-buf.c
@@ -25,14 +25,20 @@
 #include <linux/fs.h>
 #include <linux/slab.h>
 #include <linux/dma-buf.h>
+#include <linux/dma-fence.h>
 #include <linux/anon_inodes.h>
 #include <linux/export.h>
+#include <linux/sched.h>
+
+atomic_t dma_buf_reserve_counter = ATOMIC_INIT(1);
+DEFINE_SPINLOCK(dma_buf_reserve_lock);