As part of the nvmap refactoring, add the nvmap_alloc.h file, which contains
declarations for functions exposed by the nvmap_alloc unit to other units.
Also add the nvmap_alloc_int.h file, which contains declarations for
functions internal to the nvmap_alloc unit that may only be called from
files within the nvmap_alloc unit.
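A minimal sketch of the intended header split follows; the function names
and guard macros are illustrative assumptions, not the actual nvmap_alloc
API:

    /* nvmap_alloc.h: declarations exposed by the nvmap_alloc unit
     * to other nvmap units (illustrative example).
     */
    #ifndef __NVMAP_ALLOC_H
    #define __NVMAP_ALLOC_H

    struct nvmap_handle;

    int nvmap_alloc_handle(struct nvmap_handle *h, unsigned int heap_mask);

    #endif /* __NVMAP_ALLOC_H */

    /* nvmap_alloc_int.h: declarations private to files inside the
     * nvmap_alloc unit (illustrative example).
     */
    #ifndef __NVMAP_ALLOC_INT_H
    #define __NVMAP_ALLOC_INT_H

    struct nvmap_handle;

    int nvmap_alloc_from_heap(struct nvmap_handle *h, unsigned int heap_type);

    #endif /* __NVMAP_ALLOC_INT_H */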
JIRA TMM-5621
Change-Id: Ie30e5e8a4f87591eb9c49a0a349f837a22726fa5
Signed-off-by: Ketan Patil <ketanp@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nv-oot/+/3198546
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
For large buffer allocations (e.g. 4GB, 5GB) from the IOMMU heap, about 70%
of the total allocation time is spent on cache cleaning. The CUDA team
confirmed that it is not always necessary to clean the CPU cache during the
allocation flow. Hence, provide an option for users of libnvrm_mem to skip
cache cleaning when it is not required.
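A rough illustration of the idea; the flag, struct, and helper names below
are hypothetical and do not reflect the actual libnvrm_mem or nvmap
interface:

    struct nvmap_handle;

    /* Hypothetical helpers assumed by this sketch. */
    int nvmap_get_pages(struct nvmap_handle *h);
    void nvmap_clean_cache(struct nvmap_handle *h);

    /* Hypothetical allocation flag: callers that do not need CPU cache
     * maintenance (e.g. buffers only ever touched by the device) can set
     * it to skip the expensive cache clean over the whole buffer.
     */
    #define NVMAP_ALLOC_FLAG_SKIP_CACHE_CLEAN  (1u << 0)

    static int nvmap_alloc_pages(struct nvmap_handle *h, unsigned int flags)
    {
            int err;

            err = nvmap_get_pages(h);
            if (err)
                    return err;

            /* Cache cleaning dominates allocation time for multi-GB
             * buffers, so only clean when the caller has not opted out.
             */
            if (!(flags & NVMAP_ALLOC_FLAG_SKIP_CACHE_CLEAN))
                    nvmap_clean_cache(h);

            return 0;
    }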
Bug 4628529
Change-Id: I9f4cdc930fcc673b69344f0167c8bc1378ec8d61
Signed-off-by: Ketan Patil <ketanp@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nv-oot/+/3192376
Reviewed-by: Sachin Nikam <snikam@nvidia.com>
Reviewed-by: Pritesh Raithatha <praithatha@nvidia.com>
GVS: buildbot_gerritrpt <buildbot_gerritrpt@nvidia.com>
Add support for NUMA-aware nvmap heaps:
- Add a carveout node for gpu1, which is the gpu carveout on NUMA node 1.
- Add a numa_node_id property to the nvmap_heap and nvmap platform carveout
structures to hold the NUMA node ID, i.e. the NUMA node on which the heap
is created.
- gpu0 and gpu1 have the same heap bit but different NUMA node IDs.
- Update the buffer allocation function: if the user specifies a particular
NUMA node instance of the heap, allocate from that instance. By default the
NUMA node ID input is NUMA_NO_NODE; in that case, iterate over the heap
instances on all NUMA nodes to satisfy the allocation request. For example,
if the user requests the gpu carveout without specifying a NUMA node,
iterate over the gpu carveouts on all NUMA nodes and allocate from whichever
heap instance has sufficient free memory (see the sketch after this list).
- Update the debugfs functions to pass the heap type and NUMA node ID so
that debugfs info is fetched from the correct heap instance.
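A minimal sketch of the NUMA-aware heap lookup described above; aside from
the numa_node_id field and NUMA_NO_NODE, the struct layout and helper names
(nvmap_find_heap, nvmap_heap_alloc) are assumptions for illustration only:

    #include <linux/types.h>
    #include <linux/numa.h>   /* NUMA_NO_NODE, MAX_NUMNODES */

    struct nvmap_heap {
            unsigned int heap_bit;   /* heap type, e.g. the gpu carveout bit */
            int numa_node_id;        /* NUMA node on which this heap was created */
            /* ... */
    };

    /* Hypothetical helpers assumed by this sketch. */
    struct nvmap_heap *nvmap_find_heap(unsigned int heap_bit, int nid);
    void *nvmap_heap_alloc(struct nvmap_heap *heap, size_t len);

    static void *nvmap_carveout_alloc(unsigned int heap_bit, size_t len, int nid)
    {
            struct nvmap_heap *heap;
            void *block;
            int node;

            if (nid != NUMA_NO_NODE) {
                    /* Caller asked for a specific NUMA node instance. */
                    heap = nvmap_find_heap(heap_bit, nid);
                    return heap ? nvmap_heap_alloc(heap, len) : NULL;
            }

            /* NUMA_NO_NODE: try the heap instance on each node until one
             * has enough free memory to satisfy the request.
             */
            for (node = 0; node < MAX_NUMNODES; node++) {
                    heap = nvmap_find_heap(heap_bit, node);
                    if (!heap)
                            continue;
                    block = nvmap_heap_alloc(heap, len);
                    if (block)
                            return block;
            }
            return NULL;
    }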
Bug 4231517
Change-Id: I77ba4b626546003ea3c40d09351d832100596d9a
Signed-off-by: Ketan Patil <ketanp@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nv-oot/+/3003219
Reviewed-by: Pritesh Raithatha <praithatha@nvidia.com>
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
Compression carveout is not the correct carveout name, since the carveout is
used by the GPU even for non-compression use cases. Hence rename it to gpu
carveout.
Bug 3956637
Change-Id: I802b91d58d9ca120e34655c21f56c0da8c8cf677
Signed-off-by: Ketan Patil <ketanp@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nv-oot/+/2955536
Reviewed-by: Pritesh Raithatha <praithatha@nvidia.com>
CBC carveout is not the correct carveout name; compression carveout is the
correct name. The compression carveout is used to store the GPU-compressible
buffers, while the comptags and other metadata related to these buffers are
stored in the CBC carveout, which is a GSC carveout not created or
maintained by nvmap. Hence update the carveout naming.
Bug 3956637
Change-Id: I50e01a6a8d8bda66960bdee378a12fde176b682a
Signed-off-by: Ketan Patil <ketanp@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nv-oot/+/2888397
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: svcacv <svcacv@nvidia.com>
Reviewed-by: Krishna Reddy <vdumpa@nvidia.com>
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>