Re: [PATCH nouveau 06/11] platform: complete the power up/down sequence

From: Vince Hsu
Date: Tue Jan 06 2015 - 04:34:15 EST



On 01/05/2015 11:25 PM, Thierry Reding wrote:

On Thu, Dec 25, 2014 at 10:42:58AM +0800, Vince Hsu wrote:
On 12/24/2014 09:23 PM, Lucas Stach wrote:
Am Dienstag, den 23.12.2014, 18:39 +0800 schrieb Vince Hsu:
This patch adds some missing pieces of the rail gating/ungating sequence that
should, in theory, improve stability.

Signed-off-by: Vince Hsu <vinceh@xxxxxxxxxx>
---
drm/nouveau_platform.c | 42 ++++++++++++++++++++++++++++++++++++++++++
drm/nouveau_platform.h | 3 +++
2 files changed, 45 insertions(+)

diff --git a/drm/nouveau_platform.c b/drm/nouveau_platform.c
index 68788b17a45c..527fe2358fc9 100644
--- a/drm/nouveau_platform.c
+++ b/drm/nouveau_platform.c
@@ -25,9 +25,11 @@
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/of.h>
+#include <linux/of_platform.h>
#include <linux/reset.h>
#include <linux/regulator/consumer.h>
#include <soc/tegra/fuse.h>
+#include <soc/tegra/mc.h>
#include <soc/tegra/pmc.h>
#include "nouveau_drm.h"
@@ -61,6 +63,9 @@ static int nouveau_platform_power_up(struct nouveau_platform_gpu *gpu)
 	reset_control_deassert(gpu->rst);
 	udelay(10);
 
+	tegra_mc_flush(gpu->mc, gpu->swgroup, false);
+	udelay(10);
+
 	return 0;
 
 err_clamp:
@@ -77,6 +82,14 @@ static int nouveau_platform_power_down(struct nouveau_platform_gpu *gpu)
 {
 	int err;
 
+	tegra_mc_flush(gpu->mc, gpu->swgroup, true);
+	udelay(10);
+
+	err = tegra_powergate_gpu_set_clamping(true);
+	if (err)
+		return err;
+	udelay(10);
+
 	reset_control_assert(gpu->rst);
 	udelay(10);
@@ -91,6 +104,31 @@ static int nouveau_platform_power_down(struct nouveau_platform_gpu *gpu)
 	return 0;
 }
 
+static int nouveau_platform_get_mc(struct device *dev,
+				   struct tegra_mc **mc, unsigned int *swgroup)
Uhm, no. If this is needed it has to be a Tegra MC function and not
buried in nouveau code. You are using knowledge about the internal
workings of the MC driver here.

Also this should probably only take the DT node pointer as argument and
return something like a tegra_mc_client struct that contains both the
MC device pointer and the swgroup, so you can pass that to
tegra_mc_flush().
Good idea. I will have something like the following in v2 if there are no
other comments on this.

struct tegra_mc_client *tegra_mc_find_client(struct device_node *node)
{
	...
	ret = of_parse_phandle_with_args(node, "nvidia,memory-client", ...)
	...
}

There was some discussion about this a few weeks ago. I'm not sure whether we
reached a conclusion/implementation though. Thierry?

http://lists.infradead.org/pipermail/linux-arm-kernel/2014-December/308703.html
I don't think client is a good fit here. Flushing is done per SWGROUP
(on all clients of the SWGROUP). So I think we'll want something like:

gpu@0,57000000 {
	...
	nvidia,swgroup = <&mc TEGRA_SWGROUP_GPU>;
	...
};

in the DT, and return a struct tegra_mc_swgroup along the lines of:

struct tegra_mc_client {
	unsigned int id;
	unsigned int swgroup;

	struct list_head list;
};

struct tegra_mc_swgroup {
	struct list_head clients;
	unsigned int id;
};

Where tegra_mc_swgroup.clients is a list of struct tegra_mc_client
structures, each representing a memory client pertaining to the
SWGROUP.
Based on your suggestion above, I created a struct tegra_mc_swgroup:

struct tegra_mc_swgroup {
	unsigned int id;
	struct tegra_mc *mc;
	struct list_head head;
	struct list_head clients;
};

And added a list head to struct tegra_mc_soc:

struct tegra_mc_soc {
	struct tegra_mc_client *clients;
	unsigned int num_clients;

	struct tegra_mc_hr *hr_clients;
	unsigned int num_hr_clients;

	struct list_head swgroups;
	...

And created one function to build the swgroup list:

static int tegra_mc_build_swgroup(struct tegra_mc *mc)
{
	int i;

	for (i = 0; i < mc->soc->num_clients; i++) {
		struct tegra_mc_swgroup *sg;
		bool found = false;

		/* look for an existing entry for this client's swgroup */
		list_for_each_entry(sg, &mc->soc->swgroups, head) {
			if (sg->id == mc->soc->clients[i].swgroup) {
				found = true;
				break;
			}
		}

		/* none yet: allocate one and add it to the SoC list */
		if (!found) {
			sg = devm_kzalloc(mc->dev, sizeof(*sg), GFP_KERNEL);
			if (!sg)
				return -ENOMEM;

			sg->id = mc->soc->clients[i].swgroup;
			sg->mc = mc;
			INIT_LIST_HEAD(&sg->clients);
			list_add_tail(&sg->head, &mc->soc->swgroups);
		}

		list_add_tail(&mc->soc->clients[i].head, &sg->clients);
	}

	return 0;
}


We probably don't want to expose these structures publicly; an opaque
type should be enough. Then you can use functions like:

struct tegra_mc_swgroup *tegra_mc_find_swgroup(struct device_node *node);
And then I can use tegra_mc_find_swgroup() in the GK20A driver to get the
swgroup and flush its memory clients with tegra_mc_flush(swgroup).

One problem is that the mc_soc and mc_clients are defined as const. To build
the swgroup list dynamically, I have to cast away the const. I guess you won't
like that. :(

Thanks,
Vince

At some point we may even need something like:

struct tegra_mc_client *tegra_mc_find_client(struct device_node *node,
					     const char *name);

And DT content like this:

gpu@0,57000000 {
	...
	nvidia,memory-clients = <&mc 0x58>, <&mc 0x59>;
	nvidia,memory-client-names = "read", "write";
	...
};

This could be useful for latency allowance programming, but we can cross
that bridge when we come to it.

Thierry

