Re: [PATCH v2 01/29] nvmem: add support for cell lookups
From: Srinivas Kandagatla @ 2018-08-28 10:15 UTC
To: Bartosz Golaszewski, Boris Brezillon
Cc: Andrew Lunn, linux-doc, Sekhar Nori, Bartosz Golaszewski,
linux-i2c, Mauro Carvalho Chehab, Rob Herring, Florian Fainelli,
Kevin Hilman, Richard Weinberger, Russell King, Marek Vasut,
Paolo Abeni, Dan Carpenter, Grygorii Strashko, David Lechner,
Arnd Bergmann, Sven Van Asbroeck, open list:MEMORY TECHNOLOGY...,
Linux-OMAP, Linux ARM, Ivan Khoronzhuk, Greg Kroah-Hartman,
Jonathan Corbet, Linux Kernel Mailing List, Lukas Wunner, Naren,
netdev, Alban Bedel, Andrew Morton, Brian Norris,
David Woodhouse, David S . Miller
On 27/08/18 14:37, Bartosz Golaszewski wrote:
> I didn't notice it before but there's a global list of nvmem cells
Bit of history here.
The global list of nvmem_cell is there to assist non-device-tree-based cell
lookups. These cell entries come in as part of the non-DT providers'
nvmem_config.
All the device-tree-based cell lookups happen dynamically on
request/demand, and all the cell definitions come from DT.
As of today NVMEM supports both the DT and non-DT use cases; this is much simpler.
Non-DT cases have various consumer use cases.
1> Consumer is aware of the provider name and cell details.
This is probably the simple use case, where it can just use the device-based APIs.
2> Consumer is not aware of the provider name; it's just aware of the cell name.
This is the case where the global list of cells is used.
> with each cell referencing its owner nvmem device. I'm wondering if
> this isn't some kind of inversion of ownership. Shouldn't each nvmem
> device have a separate list of nvmem cells owned by it? What happens
This is mainly done for the use case where the consumer has no idea of the
provider name or any details.
The first thing a non-DT user should do is use the "NVMEM device based consumer APIs".
ex: First get a handle to the nvmem device using its nvmem provider name by
calling nvmem_device_get(), and use the nvmem_device_cell_read/write() APIs.
Also, I am not 100% sure how maintaining a cells list per nvmem
provider would help the intended purpose of the global list?
> if we have two nvmem providers with the same names for cells? I'm
Yes, it would return the first instance... which is a known issue.
I am not really sure this is a big problem as of today, but I am open to
any better suggestions!
> asking because dev_id based lookup doesn't make sense if internally
> nvmem_cell_get_from_list() doesn't care about any device names (takes
> only the cell_id as argument).
As I said this is for non DT usecase where consumers are not aware of
provider details.
>
> This doesn't cause any trouble now since there are no users defining
> cells in nvmem_config - there are only DT users - but this must be
> clarified before I can advance with correctly implementing nvmem
> lookups.
DT users should not be defining this to start with! It's redundant and
does not make sense!
>
> BTW: of_nvmem_cell_get() seems to always allocate an nvmem_cell
> instance even if the cell for this node was already added to the nvmem
> device.
I hope you got the reason why of_nvmem_cell_get() always allocates new
instance for every get!!
thanks,
srini
92+ messages in thread

Re: [PATCH v2 01/29] nvmem: add support for cell lookups
From: Bartosz Golaszewski @ 2018-08-28 11:56 UTC
To: Srinivas Kandagatla
Cc: Boris Brezillon, Andrew Lunn, linux-doc, Sekhar Nori,
Bartosz Golaszewski, linux-i2c, Mauro Carvalho Chehab,
Rob Herring, Florian Fainelli, Kevin Hilman, Richard Weinberger,
Russell King, Marek Vasut, Paolo Abeni, Dan Carpenter,
Grygorii Strashko, David Lechner, Arnd Bergmann,
Sven Van Asbroeck, open list:MEMORY TECHNOLOGY...,
Linux-OMAP, Linux ARM, Ivan Khoronzhuk, Greg Kroah-Hartman,
Jonathan Corbet, Linux Kernel Mailing List, Lukas Wunner, Naren,
netdev, Alban Bedel, Andrew Morton, Brian Norris,
David Woodhouse, David S . Miller
2018-08-28 12:15 GMT+02:00 Srinivas Kandagatla <srinivas.kandagatla@linaro.org>:
>
> On 27/08/18 14:37, Bartosz Golaszewski wrote:
>>
>> I didn't notice it before but there's a global list of nvmem cells
>
>
> Bit of history here.
>
> The global list of nvmem_cell is to assist non device tree based cell
> lookups. These cell entries come as part of the non-dt providers
> nvmem_config.
>
> All the device tree based cell lookup happen dynamically on request/demand,
> and all the cell definition comes from DT.
>
Makes perfect sense.
> As of today NVMEM supports both DT and non DT usecase, this is much simpler.
>
> Non dt cases have various consumer usecases.
>
> 1> Consumer is aware of provider name and cell details.
> This is probably simple usecase where it can just use device based
> apis.
>
> 2> Consumer is not aware of provider name, its just aware of cell name.
> This is the case where global list of cells are used.
>
I would like to support an additional use case here: the provider is
generic and is not aware of its cells at all. Since the only way of
defining nvmem cells is through DT or nvmem_config, we lack a way to
allow machine code to define cells without the provider code being
aware.
>> with each cell referencing its owner nvmem device. I'm wondering if
>> this isn't some kind of inversion of ownership. Shouldn't each nvmem
>> device have a separate list of nvmem cells owned by it? What happens
>
> This is mainly done for use case where consumer does not have idea of
> provider name or any details.
>
It doesn't need to know the provider details, but in most subsystems
the core code associates such resources by dev_id and optional con_id
as Boris already said.
> First thing non dt user should do is use "NVMEM device based consumer APIs"
>
> ex: First get handle to nvmem device using its nvmem provider name by
> calling nvmem_device_get(); and use nvmem_device_cell_read/write() apis.
>
> Also am not 100% sure how would maintaining cells list per nvmem provider
> would help for the intended purpose of global list?
>
It would fix the use case where the consumer wants to use
nvmem_cell_get(dev, name) and two nvmem providers would have a cell
with the same name.
Next we could add a way to associate dev_ids with nvmem cells.
>> if we have two nvmem providers with the same names for cells? I'm
>
> Yes, it would return the first instance.. which is a known issue.
> Am not really sure this is a big problem as of today! but am open for any
> better suggestions!
>
Yes, I would like to rework nvmem a bit. I don't see any non-DT users
defining nvmem-cells using nvmem_config. I think that what we need is
a way of specifying cell config outside of nvmem providers in some
kind of structures. These tables would reference the provider by name
and define the cells. Then we would have an additional lookup
structure which would associate the consumer (by dev_id and con_id,
where dev_id could optionally be NULL and where we would fall back to
using con_id only) and the nvmem provider + cell together. Similarly
to how GPIO consumers are associated with the gpiochip and hwnum. How
does it sound?
>
>> asking because dev_id based lookup doesn't make sense if internally
>> nvmem_cell_get_from_list() doesn't care about any device names (takes
>> only the cell_id as argument).
>
>
> As I said this is for non DT usecase where consumers are not aware of
> provider details.
>
>>
>> This doesn't cause any trouble now since there are no users defining
>> cells in nvmem_config - there are only DT users - but this must be
>> clarified before I can advance with correctly implementing nvmem
>> lookups.
>
> DT users should not be defining this to start with! It's redundant and does
> not make sense!
>
Yes, this is what I said: we only seem to have DT users, so this API
is not used at the moment.
>>
>> BTW: of_nvmem_cell_get() seems to always allocate an nvmem_cell
>> instance even if the cell for this node was already added to the nvmem
>> device.
>
> I hope you got the reason why of_nvmem_cell_get() always allocates new
> instance for every get!!
I admit I didn't test it, but just from reading the code it seems like
in nvmem_cell_get() for DT-users we'll always get to
of_nvmem_cell_get() and in there we always end up calling line 873:
cell = kzalloc(sizeof(*cell), GFP_KERNEL);
There may be something I'm missing though.
>
> thanks,
> srini
BR
Bart

Re: [PATCH v2 01/29] nvmem: add support for cell lookups
From: Srinivas Kandagatla @ 2018-08-28 13:45 UTC
To: Bartosz Golaszewski
Cc: Boris Brezillon, Andrew Lunn, linux-doc, Sekhar Nori,
Bartosz Golaszewski, linux-i2c, Mauro Carvalho Chehab,
Rob Herring, Florian Fainelli, Kevin Hilman, Richard Weinberger,
Russell King, Marek Vasut, Paolo Abeni, Dan Carpenter,
Grygorii Strashko, David Lechner, Arnd Bergmann,
Sven Van Asbroeck, open list:MEMORY TECHNOLOGY...,
Linux-OMAP, Linux ARM, Ivan Khoronzhuk, Greg Kroah-Hartman,
Jonathan Corbet, Linux Kernel Mailing List, Lukas Wunner, Naren,
netdev, Alban Bedel, Andrew Morton, Brian Norris,
David Woodhouse, David S . Miller
On 28/08/18 12:56, Bartosz Golaszewski wrote:
> 2018-08-28 12:15 GMT+02:00 Srinivas Kandagatla <srinivas.kandagatla@linaro.org>:
>>
>> On 27/08/18 14:37, Bartosz Golaszewski wrote:
>>>
>>> I didn't notice it before but there's a global list of nvmem cells
>>
>>
>> Bit of history here.
>>
>> The global list of nvmem_cell is to assist non device tree based cell
>> lookups. These cell entries come as part of the non-dt providers
>> nvmem_config.
>>
>> All the device tree based cell lookup happen dynamically on request/demand,
>> and all the cell definition comes from DT.
>>
>
> Makes perfect sense.
>
>> As of today NVMEM supports both DT and non DT usecase, this is much simpler.
>>
>> Non dt cases have various consumer usecases.
>>
>> 1> Consumer is aware of provider name and cell details.
>> This is probably simple usecase where it can just use device based
>> apis.
>>
>> 2> Consumer is not aware of provider name, its just aware of cell name.
>> This is the case where global list of cells are used.
>>
>
> I would like to support an additional use case here: the provider is
> generic and is not aware of its cells at all. Since the only way of
> defining nvmem cells is through DT or nvmem_config, we lack a way to
> allow machine code to define cells without the provider code being
> aware.
A machine driver should be able to do:
nvmem_device_get()
nvmem_add_cells()
Currently this adds to the global cell list, which is exactly like doing
it via nvmem_config.
>
>>> with each cell referencing its owner nvmem device. I'm wondering if
>>> this isn't some kind of inversion of ownership. Shouldn't each nvmem
>>> device have a separate list of nvmem cells owned by it? What happens
>>
>> This is mainly done for use case where consumer does not have idea of
>> provider name or any details.
>>
>
> It doesn't need to know the provider details, but in most subsystems
> the core code associates such resources by dev_id and optional con_id
> as Boris already said.
>
If dev_id here is referring to the provider dev_id, then we already do that
using the nvmem device APIs, except in the global cell list, which makes
dev_id optional.
>> First thing non dt user should do is use "NVMEM device based consumer APIs"
>>
>> ex: First get handle to nvmem device using its nvmem provider name by
>> calling nvmem_device_get(); and use nvmem_device_cell_read/write() apis.
>>
>> Also am not 100% sure how would maintaining cells list per nvmem provider
>> would help for the intended purpose of global list?
>>
>
> It would fix the use case where the consumer wants to use
> nvmem_cell_get(dev, name) and two nvmem providers would have a cell
> with the same name.
There is no code to enforce duplicate checks, so this would just
decrease the chances rather than fixing the problem totally.
I guess this is the same problem: finding a cell by name without a dev_id
would still be an issue. I am not too concerned about this ATM.
However, the idea of having cells per provider does sound good to me.
We should also maintain a list of providers in the core as a lookup for
cases where dev_id is null.
I did hack up a patch, in case you might want to try it.
It is only compile-tested.
---------------------------------->cut<-------------------------------
Author: Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
Date: Tue Aug 28 13:46:21 2018 +0100
nvmem: core: maintain per provider cell list
Having a global cell list could be an issue in cases where the
cell-id is the same across multiple providers. Making the cell list specific
to the provider could avoid such an issue by adding additional checks while
adding cells.
Signed-off-by: Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
index aa1657831b70..29da603f2fa4 100644
--- a/drivers/nvmem/core.c
+++ b/drivers/nvmem/core.c
@@ -40,6 +40,8 @@ struct nvmem_device {
struct device *base_dev;
nvmem_reg_read_t reg_read;
nvmem_reg_write_t reg_write;
+ struct list_head node;
+ struct list_head cells;
void *priv;
};
@@ -57,9 +59,7 @@ struct nvmem_cell {
static DEFINE_MUTEX(nvmem_mutex);
static DEFINE_IDA(nvmem_ida);
-
-static LIST_HEAD(nvmem_cells);
-static DEFINE_MUTEX(nvmem_cells_mutex);
+static LIST_HEAD(nvmem_devices);
#ifdef CONFIG_DEBUG_LOCK_ALLOC
static struct lock_class_key eeprom_lock_key;
@@ -284,26 +284,28 @@ static struct nvmem_device *of_nvmem_find(struct device_node *nvmem_np)
static struct nvmem_cell *nvmem_find_cell(const char *cell_id)
{
- struct nvmem_cell *p;
+ struct nvmem_device *d;
- mutex_lock(&nvmem_cells_mutex);
-
- list_for_each_entry(p, &nvmem_cells, node)
- if (!strcmp(p->name, cell_id)) {
- mutex_unlock(&nvmem_cells_mutex);
- return p;
- }
+ mutex_lock(&nvmem_mutex);
+ list_for_each_entry(d, &nvmem_devices, node) {
+ struct nvmem_cell *p;
+ list_for_each_entry(p, &d->cells, node)
+ if (!strcmp(p->name, cell_id)) {
+ mutex_unlock(&nvmem_mutex);
+ return p;
+ }
+ }
- mutex_unlock(&nvmem_cells_mutex);
+ mutex_unlock(&nvmem_mutex);
return NULL;
}
static void nvmem_cell_drop(struct nvmem_cell *cell)
{
- mutex_lock(&nvmem_cells_mutex);
+ mutex_lock(&nvmem_mutex);
list_del(&cell->node);
- mutex_unlock(&nvmem_cells_mutex);
+ mutex_unlock(&nvmem_mutex);
kfree(cell);
}
@@ -312,18 +314,18 @@ static void nvmem_device_remove_all_cells(const struct nvmem_device *nvmem)
struct nvmem_cell *cell;
struct list_head *p, *n;
- list_for_each_safe(p, n, &nvmem_cells) {
+ list_for_each_safe(p, n, &nvmem->cells) {
cell = list_entry(p, struct nvmem_cell, node);
if (cell->nvmem == nvmem)
nvmem_cell_drop(cell);
}
}
-static void nvmem_cell_add(struct nvmem_cell *cell)
+static void nvmem_cell_add(struct nvmem_device *nvmem, struct nvmem_cell *cell)
{
- mutex_lock(&nvmem_cells_mutex);
- list_add_tail(&cell->node, &nvmem_cells);
- mutex_unlock(&nvmem_cells_mutex);
+ mutex_lock(&nvmem_mutex);
+ list_add_tail(&cell->node, &nvmem->cells);
+ mutex_unlock(&nvmem_mutex);
}
static int nvmem_cell_info_to_nvmem_cell(struct nvmem_device *nvmem,
@@ -385,7 +387,7 @@ int nvmem_add_cells(struct nvmem_device *nvmem,
goto err;
}
- nvmem_cell_add(cells[i]);
+ nvmem_cell_add(nvmem, cells[i]);
}
/* remove tmp array */
@@ -519,6 +521,10 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
if (config->cells)
nvmem_add_cells(nvmem, config->cells, config->ncells);
+ mutex_lock(&nvmem_mutex);
+ list_add_tail(&nvmem->node, &nvmem_devices);
+ mutex_unlock(&nvmem_mutex);
+
return nvmem;
err_device_del:
@@ -544,6 +550,8 @@ int nvmem_unregister(struct nvmem_device *nvmem)
mutex_unlock(&nvmem_mutex);
return -EBUSY;
}
+
+ list_del(&nvmem->node);
mutex_unlock(&nvmem_mutex);
if (nvmem->flags & FLAG_COMPAT)
@@ -899,7 +907,7 @@ struct nvmem_cell *of_nvmem_cell_get(struct device_node *np,
goto err_sanity;
}
- nvmem_cell_add(cell);
+ nvmem_cell_add(nvmem, cell);
return cell;
---------------------------------->cut<-------------------------------
>
> Next we could add a way to associate dev_ids with nvmem cells.
>
>>> if we have two nvmem providers with the same names for cells? I'm
>>
>> Yes, it would return the first instance.. which is a known issue.
>> Am not really sure this is a big problem as of today! but am open for any
>> better suggestions!
>>
>
> Yes, I would like to rework nvmem a bit. I don't see any non-DT users
> defining nvmem-cells using nvmem_config. I think that what we need is
> a way of specifying cell config outside of nvmem providers in some
> kind of structures. These tables would reference the provider by name
> and define the cells. Then we would have an additional lookup
> structure which would associate the consumer (by dev_id and con_id,
> where dev_id could optionally be NULL and where we would fall back to
> using con_id only) and the nvmem provider + cell together. Similarly
> to how GPIO consumers are associated with the gpiochip and hwnum. How
> does it sound?
Yes, sounds good.
Correct me if I am wrong!
You should be able to add the new cells using struct nvmem_cell_info and
add them to a particular provider using nvmem_add_cells().
Sounds like that's exactly what nvmem_add_lookup_table() would look like.
We should add a new nvmem_device_cell_get(nvmem, conn_id) which would
return an nvmem cell which is specific to the provider. This cell can be
used by the machine driver to read/write.
>>>
>>> BTW: of_nvmem_cell_get() seems to always allocate an nvmem_cell
>>> instance even if the cell for this node was already added to the nvmem
>>> device.
>>
>> I hope you got the reason why of_nvmem_cell_get() always allocates new
>> instance for every get!!
>
>
> I admit I didn't test it, but just from reading the code it seems like
> in nvmem_cell_get() for DT-users we'll always get to
> of_nvmem_cell_get() and in there we always end up calling line 873:
> cell = kzalloc(sizeof(*cell), GFP_KERNEL);
>
That is correct: this cell is created when we do a get and released when
we do a put().
thanks,
srini

Re: [PATCH v2 01/29] nvmem: add support for cell lookups
From: Srinivas Kandagatla @ 2018-08-28 14:48 UTC
To: Bartosz Golaszewski
Cc: Boris Brezillon, Andrew Lunn, linux-doc, Sekhar Nori,
Bartosz Golaszewski, linux-i2c, Mauro Carvalho Chehab,
Rob Herring, Florian Fainelli, Kevin Hilman, Richard Weinberger,
Russell King, Marek Vasut, Paolo Abeni, Dan Carpenter,
Grygorii Strashko, David Lechner, Arnd Bergmann,
Sven Van Asbroeck, open list:MEMORY TECHNOLOGY...,
Linux-OMAP, Linux ARM, Ivan Khoronzhuk, Greg Kroah-Hartman,
Jonathan Corbet, Linux Kernel Mailing List, Lukas Wunner, Naren,
netdev, Alban Bedel, Andrew Morton, Brian Norris,
David Woodhouse, David S . Miller
On 28/08/18 15:41, Bartosz Golaszewski wrote:
> 2018-08-28 15:45 GMT+02:00 Srinivas Kandagatla <srinivas.kandagatla@linaro.org>:
>>
>>
...
>>> I would like to support an additional use case here: the provider is
>>> generic and is not aware of its cells at all. Since the only way of
>>> defining nvmem cells is through DT or nvmem_config, we lack a way to
>>> allow machine code to define cells without the provider code being
>>> aware.
>>
>>
>> machine driver should be able to do
>> nvmem_device_get()
>> nvmem_add_cells()
>>
>
> Indeed, I missed the fact that you can retrieve the nvmem device by
> name. Except that we cannot know that the nvmem provider has been
> registered yet when calling nvmem_device_get(). This could potentially
> be solved by my other patch that adds notifiers to nvmem, but it would
> require much more boilerplate code in every board file. I think that
> removing nvmem_cell_info from nvmem_config and having external cell
> definitions would be cleaner.
Yes, notifiers would work!
...
>>>
>>> Yes, I would like to rework nvmem a bit. I don't see any non-DT users
>>> defining nvmem-cells using nvmem_config. I think that what we need is
>>> a way of specifying cell config outside of nvmem providers in some
>>> kind of structures. These tables would reference the provider by name
>>> and define the cells. Then we would have an additional lookup
>>> structure which would associate the consumer (by dev_id and con_id,
>>> where dev_id could optionally be NULL and where we would fall back to
>>> using con_id only) and the nvmem provider + cell together. Similarly
>>> to how GPIO consumers are associated with the gpiochip and hwnum. How
>>> does it sound?
>>
>> Yes, sounds good.
>>
>> Correct me if am wrong!
>> You should be able to add the new cells using struct nvmem_cell_info and add
>> them to particular provider using nvmem_add_cells().
>>
>> Sounds like thats exactly what nvmem_add_lookup_table() would look like.
>>
>> We should add new nvmem_device_cell_get(nvmem, conn_id) which would return
>> nvmem cell which is specific to the provider. This cell can be used by the
>> machine driver to read/write.
>
> Except that we could do it lazily - when the nvmem provider actually
> gets registered instead of doing it right away and risking that the
> device isn't even there yet.
>
Yes, it makes more sense to do it once the provider is actually present!
>>
>>>>>
>>>>> BTW: of_nvmem_cell_get() seems to always allocate an nvmem_cell
>>>>> instance even if the cell for this node was already added to the nvmem
>>>>> device.
>>>>
>>>>
>>>> I hope you got the reason why of_nvmem_cell_get() always allocates new
>>>> instance for every get!!
>>>
>>>
>>>
>>> I admit I didn't test it, but just from reading the code it seems like
>>> in nvmem_cell_get() for DT-users we'll always get to
>>> of_nvmem_cell_get() and in there we always end up calling line 873:
>>> cell = kzalloc(sizeof(*cell), GFP_KERNEL);
>>>
>> That is correct, this cell is created when we do a get and release when we
>> do a put().
>>
>
> Shouldn't we add the cell to the list, and check first if it's there
> and only create it if not?
Yes I agree, duplicate entry checks are missing!
--srini
>
> Bart
>

Re: [PATCH v2 01/29] nvmem: add support for cell lookups
From: Boris Brezillon @ 2018-08-28 14:53 UTC
To: Bartosz Golaszewski
Cc: Srinivas Kandagatla, Andrew Lunn, linux-doc, Sekhar Nori,
Bartosz Golaszewski, linux-i2c, Mauro Carvalho Chehab,
Rob Herring, Florian Fainelli, Kevin Hilman, Richard Weinberger,
Russell King, Marek Vasut, Paolo Abeni, Dan Carpenter,
Grygorii Strashko, David Lechner, Arnd Bergmann,
Sven Van Asbroeck, open list:MEMORY TECHNOLOGY...,
Linux-OMAP, Linux ARM, Ivan Khoronzhuk, Greg Kroah-Hartman,
Jonathan Corbet, Linux Kernel Mailing List, Lukas Wunner, Naren,
netdev, Alban Bedel, Andrew Morton, Brian Norris,
David Woodhouse, David S . Miller
On Tue, 28 Aug 2018 16:41:04 +0200
Bartosz Golaszewski <brgl@bgdev.pl> wrote:
> 2018-08-28 15:45 GMT+02:00 Srinivas Kandagatla <srinivas.kandagatla@linaro.org>:
> >
> >
> > On 28/08/18 12:56, Bartosz Golaszewski wrote:
> >>
> >> 2018-08-28 12:15 GMT+02:00 Srinivas Kandagatla
> >> <srinivas.kandagatla@linaro.org>:
> >>>
> >>>
> >>> On 27/08/18 14:37, Bartosz Golaszewski wrote:
> >>>>
> >>>>
> >>>> I didn't notice it before but there's a global list of nvmem cells
> >>>
> >>>
> >>>
> >>> Bit of history here.
> >>>
> >>> The global list of nvmem_cell is to assist non device tree based cell
> >>> lookups. These cell entries come as part of the non-dt providers
> >>> nvmem_config.
> >>>
> >>> All the device tree based cell lookup happen dynamically on
> >>> request/demand,
> >>> and all the cell definition comes from DT.
> >>>
> >>
> >> Makes perfect sense.
> >>
> >>> As of today NVMEM supports both DT and non DT usecase, this is much
> >>> simpler.
> >>>
> >>> Non dt cases have various consumer usecases.
> >>>
> >>> 1> Consumer is aware of provider name and cell details.
> >>> This is probably simple usecase where it can just use device
> >>> based
> >>> apis.
> >>>
> >>> 2> Consumer is not aware of provider name, its just aware of cell name.
> >>> This is the case where global list of cells are used.
> >>>
> >>
> >> I would like to support an additional use case here: the provider is
> >> generic and is not aware of its cells at all. Since the only way of
> >> defining nvmem cells is through DT or nvmem_config, we lack a way to
> >> allow machine code to define cells without the provider code being
> >> aware.
> >
> >
> > machine driver should be able to do
> > nvmem_device_get()
> > nvmem_add_cells()
> >
>
> Indeed, I missed the fact that you can retrieve the nvmem device by
> name. Except that we cannot know that the nvmem provider has been
> registered yet when calling nvmem_device_get(). This could potentially
> be solved by my other patch that adds notifiers to nvmem, but it would
> require much more boilerplate code in every board file. I think that
> removing nvmem_cell_info from nvmem_config and having external cell
> definitions would be cleaner.
I also vote for this option.
> >
> > static struct nvmem_cell *nvmem_find_cell(const char *cell_id)
Can we get rid of this function and just have the version that
takes an nvmem_name and a cell_id?
> >> Yes, I would like to rework nvmem a bit. I don't see any non-DT users
> >> defining nvmem-cells using nvmem_config. I think that what we need is
> >> a way of specifying cell config outside of nvmem providers in some
> >> kind of structures. These tables would reference the provider by name
> >> and define the cells. Then we would have an additional lookup
> >> structure which would associate the consumer (by dev_id and con_id,
> >> where dev_id could optionally be NULL and where we would fall back to
> >> using con_id only) and the nvmem provider + cell together. Similarly
> >> to how GPIO consumers are associated with the gpiochip and hwnum. How
> >> does it sound?
> >
> > Yes, sounds good.
> >
> > Correct me if am wrong!
> > You should be able to add the new cells using struct nvmem_cell_info and add
> > them to particular provider using nvmem_add_cells().
> >
> > Sounds like thats exactly what nvmem_add_lookup_table() would look like.
> >
> > We should add new nvmem_device_cell_get(nvmem, conn_id) which would return
> > nvmem cell which is specific to the provider. This cell can be used by the
> > machine driver to read/write.
>
> Except that we could do it lazily - when the nvmem provider actually
> gets registered instead of doing it right away and risking that the
> device isn't even there yet.
And again, I agree with you. That's basically what lookup tables are
meant for: defining resources that are supposed to be attached to a
device when it's registered to a subsystem.
>
> >
> >>>>
> >>>> BTW: of_nvmem_cell_get() seems to always allocate an nvmem_cell
> >>>> instance even if the cell for this node was already added to the nvmem
> >>>> device.
> >>>
> >>>
> >>> I hope you got the reason why of_nvmem_cell_get() always allocates new
> >>> instance for every get!!
> >>
> >>
> >>
> >> I admit I didn't test it, but just from reading the code it seems like
> >> in nvmem_cell_get() for DT-users we'll always get to
> >> of_nvmem_cell_get() and in there we always end up calling line 873:
> >> cell = kzalloc(sizeof(*cell), GFP_KERNEL);
> >>
> > That is correct, this cell is created when we do a get and release when we
> > do a put().
> >
>
> Shouldn't we add the cell to the list, and check first if it's there
> and only create it if not?
Or even better: create the cells at registration time so that the
search code is the same for both DT and non-DT cases. Only the
registration would differ (with one path parsing the DT, and the other
one searching for nvmem cells defined with a nvmem-provider-lookup
table).

Re: [PATCH v2 01/29] nvmem: add support for cell lookups
From: Srinivas Kandagatla @ 2018-08-28 15:09 UTC
To: Boris Brezillon, Bartosz Golaszewski
Cc: Andrew Lunn, linux-doc, Sekhar Nori, Bartosz Golaszewski,
linux-i2c, Mauro Carvalho Chehab, Rob Herring, Florian Fainelli,
Kevin Hilman, Richard Weinberger, Russell King, Marek Vasut,
Paolo Abeni, Dan Carpenter, Grygorii Strashko, David Lechner,
Arnd Bergmann, Sven Van Asbroeck, open list:MEMORY TECHNOLOGY...,
Linux-OMAP, Linux ARM, Ivan Khoronzhuk, Greg Kroah-Hartman,
Jonathan Corbet, Linux Kernel Mailing List, Lukas Wunner, Naren,
netdev, Alban Bedel, Andrew Morton, Brian Norris,
David Woodhouse, David S . Miller
On 28/08/18 15:53, Boris Brezillon wrote:
> On Tue, 28 Aug 2018 16:41:04 +0200
> Bartosz Golaszewski <brgl@bgdev.pl> wrote:
>
...
>
>>>
>>> static struct nvmem_cell *nvmem_find_cell(const char *cell_id)
>
> Can we get rid of this function and just have the the version that
> takes an nvmem_name and a cell_id.
That should be feasible!
>>>>>
>>>>> I hope you got the reason why of_nvmem_cell_get() always allocates new
>>>>> instance for every get!!
>>>>
>>>>
>>>>
>>>> I admit I didn't test it, but just from reading the code it seems like
>>>> in nvmem_cell_get() for DT-users we'll always get to
>>>> of_nvmem_cell_get() and in there we always end up calling line 873:
>>>> cell = kzalloc(sizeof(*cell), GFP_KERNEL);
>>>>
>>> That is correct, this cell is created when we do a get and release when we
>>> do a put().
>>>
>>
>> Shouldn't we add the cell to the list, and check first if it's there
>> and only create it if not?
>
> Or even better: create the cells at registration time so that the
> search code is the same for both DT and non-DT cases. Only the
> registration would differ (with one path parsing the DT, and the other
> one searching for nvmem cells defined with a nvmem-provider-lookup
> table).
Makes sense! And that would go very well with the plan of an "nvmem-cell"
compatible for cells!
>

Re: [PATCH v2 06/29] mtd: Add support for reading MTD devices via the nvmem API
From: Boris Brezillon @ 2018-08-19 16:46 UTC
To: Alban, Srinivas Kandagatla
Cc: Bartosz Golaszewski, Jonathan Corbet, Sekhar Nori, Kevin Hilman,
Russell King, Arnd Bergmann, Greg Kroah-Hartman, David Woodhouse,
Brian Norris, Marek Vasut, Richard Weinberger, Grygorii Strashko,
David S . Miller, Naren, Mauro Carvalho Chehab, Andrew Morton,
Lukas Wunner, Dan Carpenter, Florian Fainelli, Ivan Khoronzhuk,
Sven Van Asbroeck, Paolo Abeni, Rob Herring, David Lechner,
Andrew Lunn, linux-doc, linux-kernel, linux-arm-kernel,
linux-i2c, linux-mtd, linux-omap, netdev, Bartosz Golaszewski
On Sun, 19 Aug 2018 13:31:06 +0200
Alban <albeu@free.fr> wrote:
> On Fri, 17 Aug 2018 18:27:20 +0200
> Boris Brezillon <boris.brezillon@bootlin.com> wrote:
>
> > Hi Bartosz,
> >
> > On Fri, 10 Aug 2018 10:05:03 +0200
> > Bartosz Golaszewski <brgl@bgdev.pl> wrote:
> >
> > > From: Alban Bedel <albeu@free.fr>
> > >
> > > Allow drivers that use the nvmem API to read data stored on MTD devices.
> > > For this the mtd devices are registered as read-only NVMEM providers.
> > >
> > > Signed-off-by: Alban Bedel <albeu@free.fr>
> > > [Bartosz:
> > > - use the managed variant of nvmem_register(),
> > > - set the nvmem name]
> > > Signed-off-by: Bartosz Golaszewski <bgolaszewski@baylibre.com>
> >
> > What happened to the 2 other patches of Alban's series? I'd really
> > like the DT case to be handled/agreed on in the same patchset, but
> > IIRC, Alban and Srinivas disagreed on how this should be represented.
> > I hope this time we'll come to an agreement, because the MTD <-> NVMEM
> > glue has been floating around for quite some time...
>
> These other patches were to fix what I consider a fundamental flaw in
> the generic NVMEM bindings, however we couldn't agree on this point.
> Bartosz later contacted me to take over this series and I suggested to
> just change the MTD NVMEM binding to use a compatible string on the
> NVMEM cells as an alternative solution to fix the clash with the old
> style MTD partition.
>
> However all this has no impact on the code needed to add NVMEM support
> to MTD, so the above patch didn't change at all.
It does have an impact on the supported binding though.
nvmem->dev.of_node is automatically assigned to mtd->dev.of_node, which
means people will be able to define their NVMEM cells directly under
the MTD device and reference them from other nodes (even if it's not
documented), and as you said, it conflicts with the old MTD partition
bindings. So we'd better agree on this binding before merging this
patch.
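For reference, the ambiguity under discussion looks roughly like this in DT terms (all node names and offsets below are invented for illustration; this is not an agreed binding): a bare child node of an MTD device can be read either as an old-style partition or as an nvmem cell.

```dts
flash@0 {
	compatible = "jedec,spi-nor";
	reg = <0>;

	/* Old-style MTD partition binding: a child node without a
	 * compatible string is treated as a partition... */
	art@7f0000 {
		reg = <0x7f0000 0x10000>;
	};

	/* ...but the generic nvmem binding reads the same shape as a
	 * cell definition, hence the clash. */
	mac-address@0 {
		reg = <0x0 0x6>;
	};
};
```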
I see several options:
1/ provide a way to tell the NVMEM framework not to use parent->of_node
even if it's != NULL. This way we really don't support defining
NVMEM cells in the DT, and also don't support referencing the nvmem
device using a phandle.
2/ define a new binding where all nvmem-cells are placed in an
"nvmem" subnode (just like we have this "partitions" subnode for
partitions), and then add a config->of_node field so that the
nvmem provider can explicitly specify the DT node representing the
nvmem device. We'll also need to set this field to ERR_PTR(-ENOENT)
in case this node does not exist so that the nvmem framework knows
that it should not assign nvmem->dev.of_node to parent->of_node
3/ only declare partitions as nvmem providers. This would solve the
problem we have with partitions defined in the DT since
defining sub-partitions in the DT is not (yet?) supported and
partition nodes are supposed to be leaf nodes. Still, I'm not a big
fan of this solution because it will prevent us from supporting
sub-partitions if we ever want/need to.
4/ Add a ->of_xlate() hook that would be called if present by the
framework instead of using the default parsing we have right now.
5/ Tell the nvmem framework the name of the subnode containing nvmem
cell definitions (if NULL that means cells are directly defined
under the nvmem provider node). We would set it to "nvmem-cells" (or
whatever you like) for the MTD case.
There are probably other options (some were proposed by Alban and
Srinivas already), but I'd like to get this sorted out before we merge
this patch.
Alban, Srinivas, any opinion?

*Re: [PATCH v2 06/29] mtd: Add support for reading MTD devices via the nvmem API
2018-08-19 16:46 ` Boris Brezillon@ 2018-08-20 10:43 ` Srinivas Kandagatla
2018-08-20 18:20 ` Boris Brezillon
2018-08-20 22:53 ` Alban1 sibling, 1 reply; 92+ messages in thread
From: Srinivas Kandagatla @ 2018-08-20 10:43 UTC (permalink / raw)
To: Boris Brezillon, Alban
Cc: Bartosz Golaszewski, Jonathan Corbet, Sekhar Nori, Kevin Hilman,
Russell King, Arnd Bergmann, Greg Kroah-Hartman, David Woodhouse,
Brian Norris, Marek Vasut, Richard Weinberger, Grygorii Strashko,
David S . Miller, Naren, Mauro Carvalho Chehab, Andrew Morton,
Lukas Wunner, Dan Carpenter, Florian Fainelli, Ivan Khoronzhuk,
Sven Van Asbroeck, Paolo Abeni, Rob Herring, David Lechner,
Andrew Lunn, linux-doc, linux-kernel, linux-arm-kernel,
linux-i2c, linux-mtd, linux-omap, netdev, Bartosz Golaszewski
Thanks Boris, for looking into this in more detail.
On 19/08/18 17:46, Boris Brezillon wrote:
> On Sun, 19 Aug 2018 13:31:06 +0200
> Alban <albeu@free.fr> wrote:
>
>> On Fri, 17 Aug 2018 18:27:20 +0200
>> Boris Brezillon <boris.brezillon@bootlin.com> wrote:
>>
>>> Hi Bartosz,
>>>
>>> On Fri, 10 Aug 2018 10:05:03 +0200
>>> Bartosz Golaszewski <brgl@bgdev.pl> wrote:
>>>
>>>> From: Alban Bedel <albeu@free.fr>
>>>>
>>>> Allow drivers that use the nvmem API to read data stored on MTD devices.
>>>> For this the mtd devices are registered as read-only NVMEM providers.
>>>>
>>>> Signed-off-by: Alban Bedel <albeu@free.fr>
>>>> [Bartosz:
>>>> - use the managed variant of nvmem_register(),
>>>> - set the nvmem name]
>>>> Signed-off-by: Bartosz Golaszewski <bgolaszewski@baylibre.com>
>>>
>>> What happened to the 2 other patches of Alban's series? I'd really
>>> like the DT case to be handled/agreed on in the same patchset, but
>>> IIRC, Alban and Srinivas disagreed on how this should be represented.
>>> I hope this time we'll come to an agreement, because the MTD <-> NVMEM
>>> glue has been floating around for quite some time...
>>
>> These other patches were to fix what I consider a fundamental flaw in
>> the generic NVMEM bindings, however we couldn't agree on this point.
>> Bartosz later contacted me to take over this series and I suggested to
>> just change the MTD NVMEM binding to use a compatible string on the
>> NVMEM cells as an alternative solution to fix the clash with the old
>> style MTD partition.
>>
>> However all this has no impact on the code needed to add NVMEM support
>> to MTD, so the above patch didn't change at all.
>
> It does have an impact on the supported binding though.
> nvmem->dev.of_node is automatically assigned to mtd->dev.of_node, which
> means people will be able to define their NVMEM cells directly under
> the MTD device and reference them from other nodes (even if it's not
> documented), and as you said, it conflict with the old MTD partition
> bindings. So we'd better agree on this binding before merging this
> patch.
>
Yes, I agree with you!
> I see several options:
>
> 1/ provide a way to tell the NVMEM framework not to use parent->of_node
> even if it's != NULL. This way we really don't support defining
> NVMEM cells in the DT, and also don't support referencing the nvmem
> device using a phandle.
>
Other options look much better than this one!
> 2/ define a new binding where all nvmem-cells are placed in an
> "nvmem" subnode (just like we have this "partitions" subnode for
> partitions), and then add a config->of_node field so that the
> nvmem provider can explicitly specify the DT node representing the
> nvmem device. We'll also need to set this field to ERR_PTR(-ENOENT)
> in case this node does not exist so that the nvmem framework knows
> that it should not assign nvmem->dev.of_node to parent->of_node
>
This one looks promising. One question though:
do we expect nvmem cells in any of the partitions, or are nvmem
cells only valid for the unpartitioned area?
I'm sure nvmem cells will end up in multiple partitions. Is it okay
for some parts of a partition to be in a separate subnode?
I would like this case to be considered too.
> 3/ only declare partitions as nvmem providers. This would solve the
> problem we have with partitions defined in the DT since
> defining sub-partitions in the DT is not (yet?) supported and
> partition nodes are supposed to be leaf nodes. Still, I'm not a big
> fan of this solution because it will prevent us from supporting
> sub-partitions if we ever want/need to.
>
This one is going to come back, so it's better we avoid it.
> 4/ Add a ->of_xlate() hook that would be called if present by the
> framework instead of using the default parsing we have right now.
This looks much cleaner! We could hook that up under
__nvmem_device_get() to do that translation.
>
> 5/ Tell the nvmem framework the name of the subnode containing nvmem
> cell definitions (if NULL that means cells are directly defined
> under the nvmem provider node). We would set it to "nvmem-cells" (or
> whatever you like) for the MTD case.
Option 2 looks better than this.
>
> There are probably other options (some were proposed by Alban and
> Srinivas already), but I'd like to get this sorted out before we merge
> this patch.
>
> Alban, Srinivas, any opinion?
Overall I'm still not able to clearly visualize how MTD bindings with
nvmem cells would look in both the partitioned and unpartitioned use cases.
An example DT would be nice here!
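For illustration only, here is one way option 2 could cover both cases; every name, compatible string, and offset below is an invented assumption, not an agreed binding:

```dts
flash@0 {
	compatible = "jedec,spi-nor";
	reg = <0>;

	/* Cells in the unpartitioned device: grouped in an "nvmem"
	 * subnode directly under the flash node. */
	nvmem {
		#address-cells = <1>;
		#size-cells = <1>;

		serial: serial@10 {
			reg = <0x10 0x8>;
		};
	};

	partitions {
		compatible = "fixed-partitions";
		#address-cells = <1>;
		#size-cells = <1>;

		art: partition@7f0000 {
			label = "art";
			reg = <0x7f0000 0x10000>;

			/* Cells inside a partition: the same subnode,
			 * one level down. */
			nvmem {
				#address-cells = <1>;
				#size-cells = <1>;

				macaddr: mac-address@0 {
					reg = <0x0 0x6>;
				};
			};
		};
	};
};
```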
Option 4 looks like the most generic solution to me; maybe we should try
it once the bindings on the MTD side w.r.t. nvmem cells are decided.
Thanks,
Srini
>

*Re: [PATCH v2 06/29] mtd: Add support for reading MTD devices via the nvmem API
2018-08-19 16:46 ` Boris Brezillon
2018-08-20 10:43 ` Srinivas Kandagatla@ 2018-08-20 22:53 ` Alban
2018-08-21 5:44 ` Boris Brezillon1 sibling, 1 reply; 92+ messages in thread
From: Alban @ 2018-08-20 22:53 UTC (permalink / raw)
To: Boris Brezillon
Cc: Aban Bedel, Srinivas Kandagatla, Bartosz Golaszewski,
Jonathan Corbet, Sekhar Nori, Kevin Hilman, Russell King,
Arnd Bergmann, Greg Kroah-Hartman, David Woodhouse, Brian Norris,
Marek Vasut, Richard Weinberger, Grygorii Strashko,
David S . Miller, Naren, Mauro Carvalho Chehab, Andrew Morton,
Lukas Wunner, Dan Carpenter, Florian Fainelli, Ivan Khoronzhuk,
Sven Van Asbroeck, Paolo Abeni, Rob Herring, David Lechner,
Andrew Lunn, linux-doc, linux-kernel, linux-arm-kernel,
linux-i2c, linux-mtd, linux-omap, netdev, Bartosz Golaszewski
On Sun, 19 Aug 2018 18:46:09 +0200
Boris Brezillon <boris.brezillon@bootlin.com> wrote:
> On Sun, 19 Aug 2018 13:31:06 +0200
> Alban <albeu@free.fr> wrote:
>
> > On Fri, 17 Aug 2018 18:27:20 +0200
> > Boris Brezillon <boris.brezillon@bootlin.com> wrote:
> >
> > > Hi Bartosz,
> > >
> > > On Fri, 10 Aug 2018 10:05:03 +0200
> > > Bartosz Golaszewski <brgl@bgdev.pl> wrote:
> > >
> > > > From: Alban Bedel <albeu@free.fr>
> > > >
> > > > Allow drivers that use the nvmem API to read data stored on MTD devices.
> > > > For this the mtd devices are registered as read-only NVMEM providers.
> > > >
> > > > Signed-off-by: Alban Bedel <albeu@free.fr>
> > > > [Bartosz:
> > > > - use the managed variant of nvmem_register(),
> > > > - set the nvmem name]
> > > > Signed-off-by: Bartosz Golaszewski <bgolaszewski@baylibre.com>
> > >
> > > What happened to the 2 other patches of Alban's series? I'd really
> > > like the DT case to be handled/agreed on in the same patchset, but
> > > IIRC, Alban and Srinivas disagreed on how this should be represented.
> > > I hope this time we'll come to an agreement, because the MTD <-> NVMEM
> > > glue has been floating around for quite some time...
> >
> > These other patches were to fix what I consider a fundamental flaw in
> > the generic NVMEM bindings, however we couldn't agree on this point.
> > Bartosz later contacted me to take over this series and I suggested to
> > just change the MTD NVMEM binding to use a compatible string on the
> > NVMEM cells as an alternative solution to fix the clash with the old
> > style MTD partition.
> >
> > However all this has no impact on the code needed to add NVMEM support
> > to MTD, so the above patch didn't change at all.
>
> It does have an impact on the supported binding though.
> nvmem->dev.of_node is automatically assigned to mtd->dev.of_node, which
> means people will be able to define their NVMEM cells directly under
> the MTD device and reference them from other nodes (even if it's not
> documented), and as you said, it conflict with the old MTD partition
> bindings. So we'd better agree on this binding before merging this
> patch.
If the nvmem cell node has a compatible string, then it won't be
considered a partition by the MTD code. That is where the clash is:
both bindings allow freely named child nodes without a compatible string.
> I see several options:
>
> 1/ provide a way to tell the NVMEM framework not to use parent->of_node
> even if it's != NULL. This way we really don't support defining
> NVMEM cells in the DT, and also don't support referencing the nvmem
> device using a phandle.
I really don't get what the point of this would be. Make the whole API
useless?
> 2/ define a new binding where all nvmem-cells are placed in an
> "nvmem" subnode (just like we have this "partitions" subnode for
> partitions), and then add a config->of_node field so that the
> nvmem provider can explicitly specify the DT node representing the
> nvmem device. We'll also need to set this field to ERR_PTR(-ENOENT)
> in case this node does not exist so that the nvmem framework knows
> that it should not assign nvmem->dev.of_node to parent->of_node
This is not good. First, the NVMEM device is only a virtual concept of
the Linux kernel; it has no place in the DT. Secondly, the NVMEM
provider (here the MTD device) then has to manually parse its DT node to
find this subnode and pass it to the NVMEM framework, only to later
resolve it back to the MTD device. Not very complex, but still a lot of
useless code; just registering the MTD device is a lot simpler and much
more in line with most other kernel APIs that register a "service"
available from a device.
> 3/ only declare partitions as nvmem providers. This would solve the
> problem we have with partitions defined in the DT since
> defining sub-partitions in the DT is not (yet?) supported and
> partition nodes are supposed to be leaf nodes. Still, I'm not a big
> fan of this solution because it will prevent us from supporting
> sub-partitions if we ever want/need to.
That sounds like a poor workaround. Remember that this problem could
appear with any device that has a binding that uses child nodes.
> 4/ Add a ->of_xlate() hook that would be called if present by the
> framework instead of using the default parsing we have right now.
That is a bit cleaner, but I don't think it would be worth the
complexity. Furthermore, xlate functions are more about converting
hardware parameters to an internal kernel representation than about
hiding extra DT parsing.
> 5/ Tell the nvmem framework the name of the subnode containing nvmem
> cell definitions (if NULL that means cells are directly defined
> under the nvmem provider node). We would set it to "nvmem-cells" (or
> whatever you like) for the MTD case.
If so please match on compatible and not on the node name.
6/ Extend the current NVMEM cell lookup to check if the parent node of
the cell has a compatible string set to "nvmem-cells". If it doesn't, it
means we have the current binding and this node is the NVMEM device. If
it does, the device node is just the next parent. This is trivial to
implement (literally 2 lines of code) and covers all the cases currently
known.
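A hedged DT sketch of what option 6 describes (names, compatibles, and offsets are invented for illustration): the container node carries compatible = "nvmem-cells", so the lookup can tell it apart from the NVMEM device node itself.

```dts
flash@0 {
	compatible = "jedec,spi-nor";
	reg = <0>;

	nvmem-cells {
		compatible = "nvmem-cells";
		#address-cells = <1>;
		#size-cells = <1>;

		/* The cell's parent has the "nvmem-cells" compatible,
		 * so the device node is the next parent up: flash@0. */
		eth0_addr: mac-address@0 {
			reg = <0x0 0x6>;
		};
	};
};
```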
7/ Just add a compatible string to the nvmem cell. No code change is
needed; however, as the nvmem cells have an address space (the offset in
bytes into the storage), it might still clash with another address space
used by the main device binding (for example a number of child
functions).
> There are probably other options (some were proposed by Alban and
> Srinivas already), but I'd like to get this sorted out before we merge
> this patch.
>
> Alban, Srinivas, any opinion?
My preference goes to 6/ as it is trivial to implement, solves all
known shortcomings and is backward compatible with the current binding.
All other solutions have limitations and/or require too complex
implementations compared to what they try to solve.
Alban

*Re: [PATCH v2 06/29] mtd: Add support for reading MTD devices via the nvmem API
2018-08-20 22:53 ` Alban@ 2018-08-21 5:44 ` Boris Brezillon
2018-08-21 9:38 ` Srinivas Kandagatla
2018-08-21 12:27 ` Alban0 siblings, 2 replies; 92+ messages in thread
From: Boris Brezillon @ 2018-08-21 5:44 UTC (permalink / raw)
To: Alban
Cc: Srinivas Kandagatla, Bartosz Golaszewski, Jonathan Corbet,
Sekhar Nori, Kevin Hilman, Russell King, Arnd Bergmann,
Greg Kroah-Hartman, David Woodhouse, Brian Norris, Marek Vasut,
Richard Weinberger, Grygorii Strashko, David S . Miller, Naren,
Mauro Carvalho Chehab, Andrew Morton, Lukas Wunner,
Dan Carpenter, Florian Fainelli, Ivan Khoronzhuk,
Sven Van Asbroeck, Paolo Abeni, Rob Herring, David Lechner,
Andrew Lunn, linux-doc, linux-kernel, linux-arm-kernel,
linux-i2c, linux-mtd, linux-omap, netdev, Bartosz Golaszewski
On Tue, 21 Aug 2018 00:53:27 +0200
Alban <albeu@free.fr> wrote:
> On Sun, 19 Aug 2018 18:46:09 +0200
> Boris Brezillon <boris.brezillon@bootlin.com> wrote:
>
> > On Sun, 19 Aug 2018 13:31:06 +0200
> > Alban <albeu@free.fr> wrote:
> >
> > > On Fri, 17 Aug 2018 18:27:20 +0200
> > > Boris Brezillon <boris.brezillon@bootlin.com> wrote:
> > >
> > > > Hi Bartosz,
> > > >
> > > > On Fri, 10 Aug 2018 10:05:03 +0200
> > > > Bartosz Golaszewski <brgl@bgdev.pl> wrote:
> > > >
> > > > > From: Alban Bedel <albeu@free.fr>
> > > > >
> > > > > Allow drivers that use the nvmem API to read data stored on MTD devices.
> > > > > For this the mtd devices are registered as read-only NVMEM providers.
> > > > >
> > > > > Signed-off-by: Alban Bedel <albeu@free.fr>
> > > > > [Bartosz:
> > > > > - use the managed variant of nvmem_register(),
> > > > > - set the nvmem name]
> > > > > Signed-off-by: Bartosz Golaszewski <bgolaszewski@baylibre.com>
> > > >
> > > > What happened to the 2 other patches of Alban's series? I'd really
> > > > like the DT case to be handled/agreed on in the same patchset, but
> > > > IIRC, Alban and Srinivas disagreed on how this should be represented.
> > > > I hope this time we'll come to an agreement, because the MTD <-> NVMEM
> > > > glue has been floating around for quite some time...
> > >
> > > These other patches were to fix what I consider a fundamental flaw in
> > > the generic NVMEM bindings, however we couldn't agree on this point.
> > > Bartosz later contacted me to take over this series and I suggested to
> > > just change the MTD NVMEM binding to use a compatible string on the
> > > NVMEM cells as an alternative solution to fix the clash with the old
> > > style MTD partition.
> > >
> > > However all this has no impact on the code needed to add NVMEM support
> > > to MTD, so the above patch didn't change at all.
> >
> > It does have an impact on the supported binding though.
> > nvmem->dev.of_node is automatically assigned to mtd->dev.of_node, which
> > means people will be able to define their NVMEM cells directly under
> > the MTD device and reference them from other nodes (even if it's not
> > documented), and as you said, it conflict with the old MTD partition
> > bindings. So we'd better agree on this binding before merging this
> > patch.
>
> Unless the nvmem cell node has a compatible string, then it won't be
> considered as a partition by the MTD code. That is were the clash is,
> both bindings allow free named child nodes without a compatible string.
Except the current nvmem cells parsing code does not enforce that, and
existing DTs rely on this behavior, so we're screwed. Or are you
suggesting to add a new "bool check_cells_compat;" field to
nvmem_config?
>
> > I see several options:
> >
> > 1/ provide a way to tell the NVMEM framework not to use parent->of_node
> > even if it's != NULL. This way we really don't support defining
> > NVMEM cells in the DT, and also don't support referencing the nvmem
> > device using a phandle.
>
> I really don't get what the point of this would be. Make the whole API
> useless?
No, just allow Bartosz to get his changes merged without waiting for you
and Srinivas to agree on how to handle the new binding. As I said
earlier, this mtd <-> nvmem stuff has been around for quite some time,
and instead of trying to find an approach that makes everyone happy, you
decided to let the patchset die.
>
> > 2/ define a new binding where all nvmem-cells are placed in an
> > "nvmem" subnode (just like we have this "partitions" subnode for
> > partitions), and then add a config->of_node field so that the
> > nvmem provider can explicitly specify the DT node representing the
> > nvmem device. We'll also need to set this field to ERR_PTR(-ENOENT)
> > in case this node does not exist so that the nvmem framework knows
> > that it should not assign nvmem->dev.of_node to parent->of_node
>
> This is not good. First the NVMEM device is only a virtual concept of
> the Linux kernel, it has no place in the DT.
nvmem-cells is a virtual concept too, still, you define them in the DT.
> Secondly the NVMEM
> provider (here the MTD device) then has to manually parse its DT node to
> find this subnode, pass it to the NVMEM framework to later again
> resolve it back to the MTD device.
We don't resolve it back to the MTD device, because the MTD device is
just the parent of the nvmem device.
> Not very complex but still a lot of
> useless code, just registering the MTD device is a lot simpler and much
> more inline with most other kernel API that register a "service"
> available from a device.
I'm not a big fan of this option either, but I thought I had to propose
it.
>
> > 3/ only declare partitions as nvmem providers. This would solve the
> > problem we have with partitions defined in the DT since
> > defining sub-partitions in the DT is not (yet?) supported and
> > partition nodes are supposed to be leaf nodes. Still, I'm not a big
> > fan of this solution because it will prevent us from supporting
> > sub-partitions if we ever want/need to.
>
> That sound like a poor workaround.
Yes, that's a workaround. And the reason I propose it, is, again,
because I don't want to block Bartosz.
> Remember that this problem could
> appear with any device that has a binding that use child nodes.
I'm talking about partitions, and you're talking about mtd devices.
Right now partitions don't have subnodes, and if we define that
partition subnodes should describe nvmem-cells, then it becomes part of
the official binding. So, no, the problem you mention does not (yet)
exist.
>
> > 4/ Add a ->of_xlate() hook that would be called if present by the
> > framework instead of using the default parsing we have right now.
>
> That is a bit cleaner, but I don't think it would be worse the
> complexity.
But it's way more flexible than putting everything in the nvmem
framework. BTW, did you notice that nvmem-cells parsing does not work
with flashes bigger than 4GB, because the framework assumes
#address-cells and #size-cells are always 1. That's probably something
we'll have to fix for the MTD case.
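A sketch of the >4GB limitation mentioned above (illustrative values only): a cell located past the 32-bit boundary needs #address-cells = <2>, which a parser assuming single-cell addresses cannot represent.

```dts
nvmem-cells {
	#address-cells = <2>;
	#size-cells = <1>;

	/* Offset 0x1_00000000 (4GiB), 16 bytes: the offset no longer
	 * fits in a single 32-bit address cell. */
	serial: serial@100000000 {
		reg = <0x1 0x00000000 0x10>;
	};
};
```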
> Furthermore xlate functions are more about converting
> from hardware parameters to internal kernel representation than to hide
> extra DT parsing.
Hm, how is that different? ->of_xlate() is just a way for drivers to
have their own DT representation, which is exactly what we want here.
>
> > 5/ Tell the nvmem framework the name of the subnode containing nvmem
> > cell definitions (if NULL that means cells are directly defined
> > under the nvmem provider node). We would set it to "nvmem-cells" (or
> > whatever you like) for the MTD case.
>
> If so please match on compatible and not on the node name.
If you like.
>
> 6/ Extend the current NVMEM cell lookup to check if the parent node of
> the cell has a compatible string set to "nvmem-cells". If it doesn't it
> mean we have the current binding and this node is the NVMEM device. If
> it does the device node is just the next parent. This is trivial to
> implement (literally 2 lines of code) and cover all the cases currently
> known.
Except Srinivas was not happy with this solution, and this stalled the
discussion. I'm trying to find other options and you keep rejecting all
of them to come back to this one.
>
> 7/ Just add a compatible string to the nvmem cell. No code change is
> needed,
That's not true!!! What forces people to add this compatible in their
DT? Nothing. I'll tell you what will happen: people will start defining
their nvmem cells directly under the MTD node because that *works*, and
even if the binding is not documented and we consider it invalid, we'll
be stuck supporting it forever. As said above, the very reason for
option #1 to exist is to give you and Srinivas some more time to sort
this out, while unblocking Bartosz in the meantime.
> however as the nvmem cells have an address space (the offset in
> byte in the storage) it might still clash with another address space
> used by the main device biding (for example a number of child
> functions).
>
> > There are probably other options (some were proposed by Alban and
> > Srinivas already), but I'd like to get this sorted out before we merge
> > this patch.
> >
> > Alban, Srinivas, any opinion?
>
> My preference goes to 6/ as it is trivial to implement, solves all
> known shortcomings and is backward compatible with the current binding.
> All other solutions have limitations and/or require too complex
> implementations compared to what they try to solve.
So we're back to square 1, and you're again blocking everything because
you refuse to consider other options.
There's obviously nothing more I can do to help, and that's unfortunate
because other people are waiting for this feature.
Regards,
Boris

*Re: [PATCH v2 06/29] mtd: Add support for reading MTD devices via the nvmem API
2018-08-21 5:44 ` Boris Brezillon
2018-08-21 9:38 ` Srinivas Kandagatla@ 2018-08-21 12:27 ` Alban
2018-08-21 12:57 ` Boris Brezillon1 sibling, 1 reply; 92+ messages in thread
From: Alban @ 2018-08-21 12:27 UTC (permalink / raw)
To: Boris Brezillon
Cc: Alban Bedel, Srinivas Kandagatla, Bartosz Golaszewski,
Jonathan Corbet, Sekhar Nori, Kevin Hilman, Russell King,
Arnd Bergmann, Greg Kroah-Hartman, David Woodhouse, Brian Norris,
Marek Vasut, Richard Weinberger, Grygorii Strashko,
David S . Miller, Naren, Mauro Carvalho Chehab, Andrew Morton,
Lukas Wunner, Dan Carpenter, Florian Fainelli, Ivan Khoronzhuk,
Sven Van Asbroeck, Paolo Abeni, Rob Herring, David Lechner,
Andrew Lunn, linux-doc, linux-kernel, linux-arm-kernel,
linux-i2c, linux-mtd, linux-omap, netdev, Bartosz Golaszewski
On Tue, 21 Aug 2018 07:44:04 +0200
Boris Brezillon <boris.brezillon@bootlin.com> wrote:
> On Tue, 21 Aug 2018 00:53:27 +0200
> Alban <albeu@free.fr> wrote:
>
> > On Sun, 19 Aug 2018 18:46:09 +0200
> > Boris Brezillon <boris.brezillon@bootlin.com> wrote:
> >
> > > On Sun, 19 Aug 2018 13:31:06 +0200
> > > Alban <albeu@free.fr> wrote:
> > >
> > > > On Fri, 17 Aug 2018 18:27:20 +0200
> > > > Boris Brezillon <boris.brezillon@bootlin.com> wrote:
> > > >
> > > > > Hi Bartosz,
> > > > >
> > > > > On Fri, 10 Aug 2018 10:05:03 +0200
> > > > > Bartosz Golaszewski <brgl@bgdev.pl> wrote:
> > > > >
> > > > > > From: Alban Bedel <albeu@free.fr>
> > > > > >
> > > > > > Allow drivers that use the nvmem API to read data stored on
> > > > > > MTD devices. For this the mtd devices are registered as
> > > > > > read-only NVMEM providers.
> > > > > >
> > > > > > Signed-off-by: Alban Bedel <albeu@free.fr>
> > > > > > [Bartosz:
> > > > > > - use the managed variant of nvmem_register(),
> > > > > > - set the nvmem name]
> > > > > > Signed-off-by: Bartosz Golaszewski
> > > > > > <bgolaszewski@baylibre.com>
> > > > >
> > > > > What happened to the 2 other patches of Alban's series? I'd
> > > > > really like the DT case to be handled/agreed on in the same
> > > > > patchset, but IIRC, Alban and Srinivas disagreed on how this
> > > > > should be represented. I hope this time we'll come to an
> > > > > agreement, because the MTD <-> NVMEM glue has been floating
> > > > > around for quite some time...
> > > >
> > > > These other patches were to fix what I consider a fundamental
> > > > flaw in the generic NVMEM bindings, however we couldn't agree
> > > > on this point. Bartosz later contacted me to take over this
> > > > series and I suggested to just change the MTD NVMEM binding to
> > > > use a compatible string on the NVMEM cells as an alternative
> > > > solution to fix the clash with the old style MTD partition.
> > > >
> > > > However all this has no impact on the code needed to add NVMEM
> > > > support to MTD, so the above patch didn't change at all.
> > >
> > > It does have an impact on the supported binding though.
> > > nvmem->dev.of_node is automatically assigned to mtd->dev.of_node,
> > > which means people will be able to define their NVMEM cells
> > > directly under the MTD device and reference them from other nodes
> > > (even if it's not documented), and as you said, it conflict with
> > > the old MTD partition bindings. So we'd better agree on this
> > > binding before merging this patch.
> >
> > Unless the nvmem cell node has a compatible string, then it won't be
> > considered as a partition by the MTD code. That is were the clash
> > is, both bindings allow free named child nodes without a compatible
> > string.
>
> Except the current nvmem cells parsing code does not enforce that, and
> existing DTs rely on this behavior, so we're screwed. Or are you
> suggesting to add a new "bool check_cells_compat;" field to
> nvmem_config?
There is no nvmem cell parsing at the moment. The DT lookup just
resolves the phandle to the cell node, takes the parent node, and
searches for the nvmem provider that has this OF node. So extending it
for the case where the node has a *new* compatible string would not
break users of the old binding; none of them has a compatible string.
> >
> > > I see several options:
> > >
> > > 1/ provide a way to tell the NVMEM framework not to use
> > > parent->of_node even if it's != NULL. This way we really don't
> > > support defining NVMEM cells in the DT, and also don't support
> > > referencing the nvmem device using a phandle.
> >
> > I really don't get what the point of this would be. Make the whole
> > API useless?
>
> No, just allow Bartosz to get his changes merged without waiting for
> you and Srinivas to agree on how to handle the new binding. As I said
> earlier, this mtd <-> nvmem stuff has been around for quite some time,
> and instead of trying to find an approach that makes everyone happy,
> you decided to let the patchset die.
As long as that wouldn't prevent using DT in the future I'm fine with
it.
> >
> > > 2/ define a new binding where all nvmem-cells are placed in an
> > > "nvmem" subnode (just like we have this "partitions" subnode
> > > for partitions), and then add a config->of_node field so that the
> > > nvmem provider can explicitly specify the DT node representing
> > > the nvmem device. We'll also need to set this field to
> > > ERR_PTR(-ENOENT) in case this node does not exist so that the
> > > nvmem framework knows that it should not assign
> > > nvmem->dev.of_node to parent->of_node
> >
> > This is not good. First the NVMEM device is only a virtual concept
> > of the Linux kernel, it has no place in the DT.
>
> nvmem-cells is a virtual concept too, still, you define them in the
> DT.
To be honest, I also think that naming this concept "nvmem" in the DT was
a bad idea. Perhaps something like "driver-data" or "data-cell" would
have been better, as that would make it clear what this is about; nvmem
is just the Linux implementation of this concept.
> > Secondly the NVMEM
> > provider (here the MTD device) then has to manually parse its DT
> > node to find this subnode, pass it to the NVMEM framework to later
> > again resolve it back to the MTD device.
>
> We don't resolve it back to the MTD device, because the MTD device is
> just the parent of the nvmem device.
>
> > Not very complex but still a lot of
> > useless code, just registering the MTD device is a lot simpler and
> > much more inline with most other kernel API that register a
> > "service" available from a device.
>
> I'm not a big fan of this option either, but I thought I had to
> propose it.
>
> >
> > > 3/ only declare partitions as nvmem providers. This would solve
> > > the problem we have with partitions defined in the DT since
> > > defining sub-partitions in the DT is not (yet?) supported and
> > > partition nodes are supposed to be leaf nodes. Still, I'm not
> > > a big fan of this solution because it will prevent us from
> > > supporting sub-partitions if we ever want/need to.
> >
> > That sounds like a poor workaround.
>
> Yes, that's a workaround. And the reason I propose it, is, again,
> because I don't want to block Bartosz.
>
> > Remember that this problem could
> > appear with any device that has a binding that uses child nodes.
>
> I'm talking about partitions, and you're talking about mtd devices.
> Right now partitions don't have subnodes, and if we define that
> partition subnodes should describe nvmem-cells, then it becomes part
> of the official binding. So, no, the problem you mention does not
> (yet) exist.
That would add another binding that allows freely named child nodes
without a compatible string, although experience has repeatedly shown
that this is a bad idea.
> >
> > > 4/ Add a ->of_xlate() hook that would be called if present by the
> > > framework instead of using the default parsing we have right
> > > now.
> >
> > That is a bit cleaner, but I don't think it would be worth the
> > complexity.
>
> But it's way more flexible than putting everything in the nvmem
> framework. BTW, did you notice that nvmem-cells parsing does not work
> with flashes bigger than 4GB, because the framework assumes
> #address-cells and #size-cells are always 1. That's probably something
> we'll have to fix for the MTD case.
Yes, however that's just an implementation limitation which is trivial
to solve.
> > Furthermore xlate functions are more about converting
> > from hardware parameters to internal kernel representation than to
> > hide extra DT parsing.
>
> Hm, how is that different? ->of_xlate() is just a way for drivers to
> have their own DT representation, which is exactly what we want here.
There is a big difference. DT represents the hardware and the
relationships between devices in an OS-independent format. We don't
add extra stuff in there just to map back internal Linux API details.
> >
> > > 5/ Tell the nvmem framework the name of the subnode containing
> > > nvmem cell definitions (if NULL that means cells are directly
> > > defined under the nvmem provider node). We would set it to
> > > "nvmem-cells" (or whatever you like) for the MTD case.
> >
> > If so please match on compatible and not on the node name.
>
> If you like.
>
> >
> > 6/ Extend the current NVMEM cell lookup to check if the parent node
> > of the cell has a compatible string set to "nvmem-cells". If it
> > doesn't it mean we have the current binding and this node is the
> > NVMEM device. If it does the device node is just the next parent.
> > This is trivial to implement (literally 2 lines of code) and cover
> > all the cases currently known.
>
> Except Srinivas was not happy with this solution, and this stalled the
> discussion. I'm trying to find other options and you keep rejecting
> all of them to come back to this one.
Well, I think this is the best solution :/
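For concreteness, a rough sketch of what the option 6 binding could look like (node names, labels, and offsets are invented for illustration): the cells sit under a container node carrying the "nvmem-cells" compatible, so the lookup code checks the cell's parent and, on a match, takes the next parent as the NVMEM device.

```dts
flash@0 {
	compatible = "jedec,spi-nor";
	reg = <0>;

	/* Container node with the "nvmem-cells" compatible: the cell's
	 * parent matches, so the device node is the next parent up. */
	nvmem-cells {
		compatible = "nvmem-cells";
		#address-cells = <1>;
		#size-cells = <1>;

		calibration: calib@f000 {
			reg = <0xf000 0x1000>;
		};
	};
};
```

Without the container node, the cell's parent has no "nvmem-cells" compatible and the current binding applies unchanged, which is what makes this backward compatible.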
> >
> > 7/ Just add a compatible string to the nvmem cell. No code change is
> > needed,
>
> That's not true!!!
What is not true in this statement? The current nvmem lookup doesn't care
about compatible strings, so the cell lookup would just work fine. The
MTD partition parser won't consider them partitions because of the
compatible string. Problem solved!
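A rough sketch of the option 7 binding (node names, labels, and offsets invented for illustration): the compatible goes on the cell node itself, directly under the MTD node, next to old-style partitions.

```dts
flash@0 {
	compatible = "jedec,spi-nor";
	reg = <0>;

	/* Old-style partition: no compatible, parsed as before. */
	partition@0 {
		label = "u-boot";
		reg = <0x0 0x40000>;
	};

	/* Option 7: the compatible marks this node as a cell, so the
	 * MTD partition parser skips it. */
	calibration: calib@f000 {
		compatible = "nvmem-cell";
		reg = <0xf000 0x1000>;
	};
};
```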
> What forces people to add this compatible in their
> DT? Nothing. I'll tell you what will happen: people will start
> defining their nvmem cells directly under the MTD node because that
> *works*, and even if the binding is not documented and we consider it
> invalid, we'll be stuck supporting it forever.
Do note that undocumented bindings are not allowed. DTS that use
undocumented bindings (normally) just get rejected.
> As said above, the
> very reason for option #1 to exist is to give you and Srinivas some
> more time to sort this out, while unblocking Bartosz in the meantime.
I'm fine with #1, I just didn't understand what it was useful for.
> > however as the nvmem cells have an address space (the offset in
> > byte in the storage) it might still clash with another address space
> > used by the main device biding (for example a number of child
> > functions).
> >
> > > There are probably other options (some were proposed by Alban and
> > > Srinivas already), but I'd like to get this sorted out before we
> > > merge this patch.
> > >
> > > Alban, Srinivas, any opinion?
> >
> > My preference goes to 6/ as it is trivial to implement, solves all
> > known shortcomings and is backward compatible with the current
> > binding. All other solutions have limitations and/or require too
> > complex implementations compared to what they try to solve.
>
> So we're back to square 1, and you're again blocking everything
> because you refuse to consider other options.
I'm not a maintainer, so I can't block anything. But I won't lie
and pretend that I support a solution with known shortcomings.
Alban

*Re: [PATCH v2 06/29] mtd: Add support for reading MTD devices via the nvmem API
2018-08-21 12:27 ` Alban
@ 2018-08-21 12:57 ` Boris Brezillon
2018-08-21 13:57 ` Alban
0 siblings, 1 reply; 92+ messages in thread
From: Boris Brezillon @ 2018-08-21 12:57 UTC (permalink / raw)
To: Alban
Cc: Srinivas Kandagatla, Bartosz Golaszewski, Jonathan Corbet,
Sekhar Nori, Kevin Hilman, Russell King, Arnd Bergmann,
Greg Kroah-Hartman, David Woodhouse, Brian Norris, Marek Vasut,
Richard Weinberger, Grygorii Strashko, David S . Miller, Naren,
Mauro Carvalho Chehab, Andrew Morton, Lukas Wunner,
Dan Carpenter, Florian Fainelli, Ivan Khoronzhuk,
Sven Van Asbroeck, Paolo Abeni, Rob Herring, David Lechner,
Andrew Lunn, linux-doc, linux-kernel, linux-arm-kernel,
linux-i2c, linux-mtd, linux-omap, netdev, Bartosz Golaszewski
On Tue, 21 Aug 2018 14:27:16 +0200
Alban <albeu@free.fr> wrote:
> On Tue, 21 Aug 2018 07:44:04 +0200
> Boris Brezillon <boris.brezillon@bootlin.com> wrote:
>
> > On Tue, 21 Aug 2018 00:53:27 +0200
> > Alban <albeu@free.fr> wrote:
> >
> > > On Sun, 19 Aug 2018 18:46:09 +0200
> > > Boris Brezillon <boris.brezillon@bootlin.com> wrote:
> > >
> > > > On Sun, 19 Aug 2018 13:31:06 +0200
> > > > Alban <albeu@free.fr> wrote:
> > > >
> > > > > On Fri, 17 Aug 2018 18:27:20 +0200
> > > > > Boris Brezillon <boris.brezillon@bootlin.com> wrote:
> > > > >
> > > > > > Hi Bartosz,
> > > > > >
> > > > > > On Fri, 10 Aug 2018 10:05:03 +0200
> > > > > > Bartosz Golaszewski <brgl@bgdev.pl> wrote:
> > > > > >
> > > > > > > From: Alban Bedel <albeu@free.fr>
> > > > > > >
> > > > > > > Allow drivers that use the nvmem API to read data stored on
> > > > > > > MTD devices. For this the mtd devices are registered as
> > > > > > > read-only NVMEM providers.
> > > > > > >
> > > > > > > Signed-off-by: Alban Bedel <albeu@free.fr>
> > > > > > > [Bartosz:
> > > > > > > - use the managed variant of nvmem_register(),
> > > > > > > - set the nvmem name]
> > > > > > > Signed-off-by: Bartosz Golaszewski
> > > > > > > <bgolaszewski@baylibre.com>
> > > > > >
> > > > > > What happened to the 2 other patches of Alban's series? I'd
> > > > > > really like the DT case to be handled/agreed on in the same
> > > > > > patchset, but IIRC, Alban and Srinivas disagreed on how this
> > > > > > should be represented. I hope this time we'll come to an
> > > > > > agreement, because the MTD <-> NVMEM glue has been floating
> > > > > > around for quite some time...
> > > > >
> > > > > These other patches were to fix what I consider a fundamental
> > > > > flaw in the generic NVMEM bindings, however we couldn't agree
> > > > > on this point. Bartosz later contacted me to take over this
> > > > > series and I suggested to just change the MTD NVMEM binding to
> > > > > use a compatible string on the NVMEM cells as an alternative
> > > > > solution to fix the clash with the old style MTD partition.
> > > > >
> > > > > However all this has no impact on the code needed to add NVMEM
> > > > > support to MTD, so the above patch didn't change at all.
> > > >
> > > > It does have an impact on the supported binding though.
> > > > nvmem->dev.of_node is automatically assigned to mtd->dev.of_node,
> > > > which means people will be able to define their NVMEM cells
> > > > directly under the MTD device and reference them from other nodes
> > > > (even if it's not documented), and as you said, it conflict with
> > > > the old MTD partition bindings. So we'd better agree on this
> > > > binding before merging this patch.
> > >
> > > Unless the nvmem cell node has a compatible string, then it won't be
> > > considered as a partition by the MTD code. That is where the clash
> > > is: both bindings allow freely named child nodes without a compatible
> > > string.
> >
> > Except the current nvmem cells parsing code does not enforce that, and
> > existing DTs rely on this behavior, so we're screwed. Or are you
> > suggesting to add a new "bool check_cells_compat;" field to
> > nvmem_config?
>
> There is no nvmem cell parsing at the moment. The DT lookup just
> resolves the phandle to the cell node, takes the parent node and searches
> for the nvmem provider that has this OF node. So extending it in case
> the node has a *new* compatible string would not break users of the old
> binding, since none of them has a compatible string.
But we want to enforce the compat check on MTD devices, otherwise old
MTD partitions (those defined under the MTD node) will be considered as
NVMEM cells by the NVMEM framework. Hence the bool check_cells_compat
field.
>
> > >
> > > > I see several options:
> > > >
> > > > 1/ provide a way to tell the NVMEM framework not to use
> > > > parent->of_node even if it's != NULL. This way we really don't
> > > > support defining NVMEM cells in the DT, and also don't support
> > > > referencing the nvmem device using a phandle.
> > >
> > > I really don't get what the point of this would be. Make the whole
> > > API useless?
> >
> > No, just allow Bartosz to get his changes merged without waiting for
> > you and Srinivas to agree on how to handle the new binding. As I said
> > earlier, this mtd <-> nvmem stuff has been around for quite some time,
> > and instead of trying to find an approach that makes everyone happy,
> > you decided to let the patchset die.
>
> As long as that wouldn't prevent using DT in the future I'm fine with
> it.
>
> > >
> > > > 2/ define a new binding where all nvmem-cells are placed in an
> > > > "nvmem" subnode (just like we have this "partitions" subnode
> > > > for partitions), and then add a config->of_node field so that the
> > > > nvmem provider can explicitly specify the DT node representing
> > > > the nvmem device. We'll also need to set this field to
> > > > ERR_PTR(-ENOENT) in case this node does not exist so that the
> > > > nvmem framework knows that it should not assign
> > > > nvmem->dev.of_node to parent->of_node
> > >
> > > This is not good. First the NVMEM device is only a virtual concept
> > > of the Linux kernel, it has no place in the DT.
> >
> > nvmem-cells is a virtual concept too, still, you define them in the
> > DT.
>
> To be honest I also think that naming this concept "nvmem" in the DT was
> a bad idea. Perhaps something like "driver-data" or "data-cell" would
> have been better, as that would make it clear what this is about; nvmem
> is just the Linux implementation of this concept.
I'm fine using a different name.
>
> > > Secondly the NVMEM
> > > provider (here the MTD device) then has to manually parse its DT
> > > node to find this subnode, pass it to the NVMEM framework to later
> > > again resolve it back to the MTD device.
> >
> > We don't resolve it back to the MTD device, because the MTD device is
> > just the parent of the nvmem device.
> >
> > > Not very complex but still a lot of
> > > useless code, just registering the MTD device is a lot simpler and
> > > much more in line with most other kernel APIs that register a
> > > "service" available from a device.
> >
> > I'm not a big fan of this option either, but I thought I had to
> > propose it.
> >
> > >
> > > > 3/ only declare partitions as nvmem providers. This would solve
> > > > the problem we have with partitions defined in the DT since
> > > > defining sub-partitions in the DT is not (yet?) supported and
> > > > partition nodes are supposed to be leaf nodes. Still, I'm not
> > > > a big fan of this solution because it will prevent us from
> > > > supporting sub-partitions if we ever want/need to.
> > >
> > > That sounds like a poor workaround.
> >
> > Yes, that's a workaround. And the reason I propose it, is, again,
> > because I don't want to block Bartosz.
> >
> > > Remember that this problem could
> > > appear with any device that has a binding that uses child nodes.
> >
> > I'm talking about partitions, and you're talking about mtd devices.
> > Right now partitions don't have subnodes, and if we define that
> > partition subnodes should describe nvmem-cells, then it becomes part
> > of the official binding. So, no, the problem you mention does not
> > (yet) exist.
>
> That would add another binding that allows freely named child nodes
> without a compatible string, although experience has repeatedly shown
> that this is a bad idea.
Yes, I agree. Just thought it was important to have this solution in
the list, even if it's just to reject it.
>
> > >
> > > > 4/ Add a ->of_xlate() hook that would be called if present by the
> > > > framework instead of using the default parsing we have right
> > > > now.
> > >
> > > That is a bit cleaner, but I don't think it would be worth the
> > > complexity.
> >
> > But it's way more flexible than putting everything in the nvmem
> > framework. BTW, did you notice that nvmem-cells parsing does not work
> > with flashes bigger than 4GB, because the framework assumes
> > #address-cells and #size-cells are always 1. That's probably something
> > we'll have to fix for the MTD case.
>
> Yes, however that's just an implementation limitation which is trivial
> to solve.
Agree. I was just pointing it out in case you hadn't noticed.
>
> > > Furthermore xlate functions are more about converting
> > > from hardware parameters to internal kernel representation than to
> > > hide extra DT parsing.
> >
> > Hm, how is that different? ->of_xlate() is just a way for drivers to
> > have their own DT representation, which is exactly what we want here.
>
> There is a big difference. DT represents the hardware and the
> relationships between devices in an OS-independent format. We don't
> add extra stuff in there just to map back internal Linux API details.
And I'm not talking about adding SW information in the DT, I'm talking
about HW specific description. We have the same solution for pinctrl
configs (it's HW/driver specific).
>
> > >
> > > > 5/ Tell the nvmem framework the name of the subnode containing
> > > > nvmem cell definitions (if NULL that means cells are directly
> > > > defined under the nvmem provider node). We would set it to
> > > > "nvmem-cells" (or whatever you like) for the MTD case.
> > >
> > > If so please match on compatible and not on the node name.
> >
> > If you like.
> >
> > >
> > > 6/ Extend the current NVMEM cell lookup to check if the parent node
> > > of the cell has a compatible string set to "nvmem-cells". If it
> > > doesn't it mean we have the current binding and this node is the
> > > NVMEM device. If it does the device node is just the next parent.
> > > This is trivial to implement (literally 2 lines of code) and cover
> > > all the cases currently known.
> >
> > Except Srinivas was not happy with this solution, and this stalled the
> > discussion. I'm trying to find other options and you keep rejecting
> > all of them to come back to this one.
>
> Well, I think this is the best solution :/
>
> > >
> > > 7/ Just add a compatible string to the nvmem cell. No code change is
> > > needed,
> >
> > That's not true!!!
>
> What is not true in this statement? The current nvmem lookup doesn't
> care about compatible strings, so the cell lookup would just work
> fine. The MTD partition parser won't consider them partitions
> because of the compatible string. Problem solved!
No, because partitions defined the old way (as direct subnodes of the MTD
node) will be considered NVMEM cells by the NVMEM framework, and I
don't want that. Plus, I don't want people to start defining their
NVMEM cells and forget the compat string (which would work just fine
because the NVMEM framework doesn't care).
>
> > What forces people to add this compatible in their
> > DT? Nothing. I'll tell you what will happen: people will start
> > defining their nvmem cells directly under the MTD node because that
> > *works*, and even if the binding is not documented and we consider it
> > invalid, we'll be stuck supporting it forever.
>
> Do note that undocumented bindings are not allowed. DTS that use
> undocumented bindings (normally) just get rejected.
Except that's just in theory. In practice, if people can do something
wrong, they'll complain if you later fix the bug and break their setup.
So no, if we go for the "nvmem cells have an 'nvmem-cell' compat", then
I'd like the NVMEM framework to enforce that somehow.

*Re: [PATCH v2 06/29] mtd: Add support for reading MTD devices via the nvmem API
2018-08-21 12:57 ` Boris Brezillon
@ 2018-08-21 13:57 ` Alban
2018-08-21 14:26 ` Boris Brezillon
0 siblings, 1 reply; 92+ messages in thread
From: Alban @ 2018-08-21 13:57 UTC (permalink / raw)
To: Boris Brezillon
Cc: Alban Bedel, Srinivas Kandagatla, Bartosz Golaszewski,
Jonathan Corbet, Sekhar Nori, Kevin Hilman, Russell King,
Arnd Bergmann, Greg Kroah-Hartman, David Woodhouse, Brian Norris,
Marek Vasut, Richard Weinberger, Grygorii Strashko,
David S . Miller, Naren, Mauro Carvalho Chehab, Andrew Morton,
Lukas Wunner, Dan Carpenter, Florian Fainelli, Ivan Khoronzhuk,
Sven Van Asbroeck, Paolo Abeni, Rob Herring, David Lechner,
Andrew Lunn, linux-doc, linux-kernel, linux-arm-kernel,
linux-i2c, linux-mtd, linux-omap, netdev, Bartosz Golaszewski
On Tue, 21 Aug 2018 14:57:25 +0200
Boris Brezillon <boris.brezillon@bootlin.com> wrote:
> On Tue, 21 Aug 2018 14:27:16 +0200
> Alban <albeu@free.fr> wrote:
>
> > On Tue, 21 Aug 2018 07:44:04 +0200
> > Boris Brezillon <boris.brezillon@bootlin.com> wrote:
> >
> > > On Tue, 21 Aug 2018 00:53:27 +0200
> > > Alban <albeu@free.fr> wrote:
> > >
> > > > On Sun, 19 Aug 2018 18:46:09 +0200
> > > > Boris Brezillon <boris.brezillon@bootlin.com> wrote:
> > > >
> > > > > On Sun, 19 Aug 2018 13:31:06 +0200
> > > > > Alban <albeu@free.fr> wrote:
> > > > >
> > > > > > On Fri, 17 Aug 2018 18:27:20 +0200
> > > > > > Boris Brezillon <boris.brezillon@bootlin.com> wrote:
> > > > > >
> > > > > > > Hi Bartosz,
> > > > > > >
> > > > > > > On Fri, 10 Aug 2018 10:05:03 +0200
> > > > > > > Bartosz Golaszewski <brgl@bgdev.pl> wrote:
> > > > > > >
> > > > > > > > From: Alban Bedel <albeu@free.fr>
> > > > > > > >
> > > > > > > > Allow drivers that use the nvmem API to read data
> > > > > > > > stored on MTD devices. For this the mtd devices are
> > > > > > > > registered as read-only NVMEM providers.
> > > > > > > >
> > > > > > > > Signed-off-by: Alban Bedel <albeu@free.fr>
> > > > > > > > [Bartosz:
> > > > > > > > - use the managed variant of nvmem_register(),
> > > > > > > > - set the nvmem name]
> > > > > > > > Signed-off-by: Bartosz Golaszewski
> > > > > > > > <bgolaszewski@baylibre.com>
> > > > > > >
> > > > > > > What happened to the 2 other patches of Alban's series?
> > > > > > > I'd really like the DT case to be handled/agreed on in
> > > > > > > the same patchset, but IIRC, Alban and Srinivas disagreed
> > > > > > > on how this should be represented. I hope this time we'll
> > > > > > > come to an agreement, because the MTD <-> NVMEM glue has
> > > > > > > been floating around for quite some time...
> > > > > >
> > > > > > These other patches were to fix what I consider a
> > > > > > fundamental flaw in the generic NVMEM bindings, however we
> > > > > > couldn't agree on this point. Bartosz later contacted me to
> > > > > > take over this series and I suggested to just change the
> > > > > > MTD NVMEM binding to use a compatible string on the NVMEM
> > > > > > cells as an alternative solution to fix the clash with the
> > > > > > old style MTD partition.
> > > > > >
> > > > > > However all this has no impact on the code needed to add
> > > > > > NVMEM support to MTD, so the above patch didn't change at
> > > > > > all.
> > > > >
> > > > > It does have an impact on the supported binding though.
> > > > > nvmem->dev.of_node is automatically assigned to
> > > > > mtd->dev.of_node, which means people will be able to define
> > > > > their NVMEM cells directly under the MTD device and reference
> > > > > them from other nodes (even if it's not documented), and as
> > > > > you said, it conflict with the old MTD partition bindings. So
> > > > > we'd better agree on this binding before merging this
> > > > > patch.
> > > >
> > > > Unless the nvmem cell node has a compatible string, then it
> > > > won't be considered as a partition by the MTD code. That is
> > > > where the clash is: both bindings allow freely named child nodes
> > > > without a compatible string.
> > >
> > > Except the current nvmem cells parsing code does not enforce
> > > that, and existing DTs rely on this behavior, so we're screwed.
> > > Or are you suggesting to add a new "bool check_cells_compat;"
> > > field to nvmem_config?
> >
> > There is no nvmem cell parsing at the moment. The DT lookup just
> > resolves the phandle to the cell node, takes the parent node and
> > searches for the nvmem provider that has this OF node. So extending
> > it in case the node has a *new* compatible string would not break
> > users of the old binding, since none of them has a compatible string.
>
> But we want to enforce the compat check on MTD devices, otherwise old
> MTD partitions (those defined under the MTD node) will be considered
> as NVMEM cells by the NVMEM framework. Hence the bool
> check_cells_compat field.
That would only be needed if the NVMEM framework did "forward"
parsing, creating a data structure for each NVMEM cell found under an
NVMEM provider. However, currently it doesn't do that and only goes
"backward", starting by resolving a phandle pointing to a cell, then
finding the provider that the cell belongs to.
This also has the side effect that nvmem cells defined in DT don't
appear in sysfs, unlike those defined from board code.
> >
> > > >
> > > > > I see several options:
> > > > >
> > > > > 1/ provide a way to tell the NVMEM framework not to use
> > > > > parent->of_node even if it's != NULL. This way we really don't
> > > > > support defining NVMEM cells in the DT, and also don't support
> > > > > referencing the nvmem device using a phandle.
> > > >
> > > > I really don't get what the point of this would be. Make the
> > > > whole API useless?
> > >
> > > No, just allow Bartosz to get his changes merged without waiting
> > > for you and Srinivas to agree on how to handle the new binding.
> > > As I said earlier, this mtd <-> nvmem stuff has been around for
> > > quite some time, and instead of trying to find an approach that
> > > makes everyone happy, you decided to let the patchset die.
> >
> > As long as that wouldn't prevent using DT in the future I'm fine
> > with it.
> >
> > > >
> > > > > 2/ define a new binding where all nvmem-cells are placed in an
> > > > > "nvmem" subnode (just like we have this "partitions"
> > > > > subnode for partitions), and then add a config->of_node field
> > > > > so that the nvmem provider can explicitly specify the DT node
> > > > > representing the nvmem device. We'll also need to set this
> > > > > field to ERR_PTR(-ENOENT) in case this node does not exist so
> > > > > that the nvmem framework knows that it should not assign
> > > > > nvmem->dev.of_node to parent->of_node
> > > >
> > > > This is not good. First the NVMEM device is only a virtual
> > > > concept of the Linux kernel, it has no place in the DT.
> > >
> > > nvmem-cells is a virtual concept too, still, you define them in
> > > the DT.
> >
> > To be honest I also think that naming this concept "nvmem" in the
> > DT was a bad idea. Perhaps something like "driver-data" or
> > "data-cell" would have been better, as that would make it clear what
> > this is about; nvmem is just the Linux implementation of this
> > concept.
>
> I'm fine using a different name.
>
> >
> > > > Secondly the NVMEM
> > > > provider (here the MTD device) then has to manually parse its DT
> > > > node to find this subnode, pass it to the NVMEM framework to
> > > > later again resolve it back to the MTD device.
> > >
> > > We don't resolve it back to the MTD device, because the MTD
> > > device is just the parent of the nvmem device.
> > >
> > > > Not very complex but still a lot of
> > > > useless code, just registering the MTD device is a lot simpler
> > > > and much more in line with most other kernel APIs that register a
> > > > "service" available from a device.
> > >
> > > I'm not a big fan of this option either, but I thought I had to
> > > propose it.
> > >
> > > >
> > > > > 3/ only declare partitions as nvmem providers. This would
> > > > > solve the problem we have with partitions defined in the DT
> > > > > since defining sub-partitions in the DT is not (yet?)
> > > > > supported and partition nodes are supposed to be leaf nodes.
> > > > > Still, I'm not a big fan of this solution because it will
> > > > > prevent us from supporting sub-partitions if we ever
> > > > > want/need to.
> > > >
> > > > That sounds like a poor workaround.
> > >
> > > Yes, that's a workaround. And the reason I propose it, is, again,
> > > because I don't want to block Bartosz.
> > >
> > > > Remember that this problem could
> > > > appear with any device that has a binding that uses child
> > > > nodes.
> > >
> > > I'm talking about partitions, and you're talking about mtd
> > > devices. Right now partitions don't have subnodes, and if we
> > > define that partition subnodes should describe nvmem-cells, then
> > > it becomes part of the official binding. So, no, the problem you
> > > mention does not (yet) exist.
> >
> > That would add another binding that allows freely named child nodes
> > without a compatible string, although experience has repeatedly shown
> > that this is a bad idea.
>
> Yes, I agree. Just thought it was important to have this solution in
> the list, even if it's just to reject it.
>
> >
> > > >
> > > > > 4/ Add a ->of_xlate() hook that would be called if present by
> > > > > the framework instead of using the default parsing we have
> > > > > right now.
> > > >
> > > > That is a bit cleaner, but I don't think it would be worth the
> > > > complexity.
> > >
> > > But it's way more flexible than putting everything in the nvmem
> > > framework. BTW, did you notice that nvmem-cells parsing does not
> > > work with flashes bigger than 4GB, because the framework assumes
> > > #address-cells and #size-cells are always 1. That's probably
> > > something we'll have to fix for the MTD case.
> >
> > Yes, however that's just an implementation limitation which is
> > trivial to solve.
>
> Agree. I was just pointing it in case you hadn't noticed.
>
> >
> > > > Furthermore xlate functions are more about converting
> > > > from hardware parameters to internal kernel representation than
> > > > to hide extra DT parsing.
> > >
> > > Hm, how is that different? ->of_xlate() is just a way for drivers
> > > to have their own DT representation, which is exactly what we
> > > want here.
> >
> > There is a big difference. DT represents the hardware and the
> > relationships between devices in an OS-independent format. We
> > don't add extra stuff in there just to map back internal Linux API
> > details.
>
> And I'm not talking about adding SW information in the DT, I'm talking
> about HW specific description. We have the same solution for pinctrl
> configs (it's HW/driver specific).
For pinctrl I do understand: these beasts can be very different from SoC
to SoC, so having a single binding for all doesn't make much sense.
However, here we are talking about simple linear storage, nothing
special at all. I could see the need for an xlate to, for example,
support a device with several partitions, but not to just allow each
driver to have slightly incompatible bindings.
> >
> > > >
> > > > > 5/ Tell the nvmem framework the name of the subnode containing
> > > > > nvmem cell definitions (if NULL that means cells are directly
> > > > > defined under the nvmem provider node). We would set it to
> > > > > "nvmem-cells" (or whatever you like) for the MTD case.
> > > >
> > > > If so please match on compatible and not on the node name.
> > >
> > > If you like.
> > >
> > > >
> > > > 6/ Extend the current NVMEM cell lookup to check if the parent
> > > > node of the cell has a compatible string set to "nvmem-cells".
> > > > If it doesn't it mean we have the current binding and this node
> > > > is the NVMEM device. If it does the device node is just the
> > > > next parent. This is trivial to implement (literally 2 lines of
> > > > code) and cover all the cases currently known.
> > >
> > > Except Srinivas was not happy with this solution, and this
> > > stalled the discussion. I'm trying to find other options and you
> > > keep rejecting all of them to come back to this one.
> >
> > Well, I think this is the best solution :/
> >
> > > >
> > > > 7/ Just add a compatible string to the nvmem cell. No code
> > > > change is needed,
> > >
> > > That's not true!!!
> >
> > What is not true in this statement? The current nvmem lookup doesn't
> > care about compatible strings, so the cell lookup would just work
> > fine. The MTD partition parser won't consider them partitions
> > because of the compatible string. Problem solved!
>
> No because partitions defined the old way (as direct subnodes of the
> MTD node) will be considered as NVMEM cells by the NVMEM framework,
> and I don't want that.
As I explained above, that is not currently the case. If the NVMEM
framework is ever changed to explicitly parse NVMEM cells in advance,
we can first update the few existing users to add the compatible string.
> Plus, I don't want people to start defining their NVMEM cells and
> forget the compat string (which would work just fine because the
> NVMEM framework doesn't care).
A review of a new DTS should check that it uses each binding correctly;
AFAIK the DT people do that. We could also add a warning when there is
no compatible string; that would also help push people to update
their DTS.
> >
> > > What forces people to add this compatible in their
> > > DT? Nothing. I'll tell you what will happen: people will start
> > > defining their nvmem cells directly under the MTD node because
> > > that *works*, and even if the binding is not documented and we
> > > consider it invalid, we'll be stuck supporting it forever.
> >
> > Do note that undocumented bindings are not allowed. DTS that use
> > undocumented bindings (normally) just get rejected.
>
> Except that's just in theory. In practice, if people can do something
> wrong, they'll complain if you later fix the bug and break their
> setup. So no, if we go for the "nvmem cells have an 'nvmem-cell'
> compat", then I'd like the NVMEM framework to enforce that somehow.
That should be trivial to implement.
Alban

*Re: [PATCH v2 06/29] mtd: Add support for reading MTD devices via the nvmem API
2018-08-21 13:57 ` Alban
@ 2018-08-21 14:26 ` Boris Brezillon
2018-08-21 14:33 ` Srinivas Kandagatla
0 siblings, 1 reply; 92+ messages in thread
From: Boris Brezillon @ 2018-08-21 14:26 UTC (permalink / raw)
To: Alban
Cc: Srinivas Kandagatla, Bartosz Golaszewski, Jonathan Corbet,
Sekhar Nori, Kevin Hilman, Russell King, Arnd Bergmann,
Greg Kroah-Hartman, David Woodhouse, Brian Norris, Marek Vasut,
Richard Weinberger, Grygorii Strashko, David S . Miller, Naren,
Mauro Carvalho Chehab, Andrew Morton, Lukas Wunner,
Dan Carpenter, Florian Fainelli, Ivan Khoronzhuk,
Sven Van Asbroeck, Paolo Abeni, Rob Herring, David Lechner,
Andrew Lunn, linux-doc, linux-kernel, linux-arm-kernel,
linux-i2c, linux-mtd, linux-omap, netdev, Bartosz Golaszewski
On Tue, 21 Aug 2018 15:57:06 +0200
Alban <albeu@free.fr> wrote:
>
> That would only be needed if the NVMEM framework did "forward"
> parsing, creating a data structure for each NVMEM cell found under an
> NVMEM provider. However, currently it doesn't do that and only goes
> "backward", starting by resolving a phandle pointing to a cell, then
> finding the provider that the cell belongs to.
Yes, I missed that when briefly looking at the code.
>
> This also has the side effect that nvmem cells defined in DT don't
> appear in sysfs, unlike those defined from board code.
Wow, that's not good. I guess we'll want to make that consistent at
some point.
> > > > > Furthermore, xlate functions are more about converting
> > > > > from hardware parameters to an internal kernel representation
> > > > > than about hiding extra DT parsing.
> > > >
> > > > Hm, how is that different? ->of_xlate() is just a way for drivers
> > > > to have their own DT representation, which is exactly what we
> > > > want here.
> > >
> > > There is a big difference. DT represents the hardware and the
> > > relationship between the devices in an OS-independent format. We
> > > don't add extra stuff in there just to map back internal Linux API
> > > details.
> >
> > And I'm not talking about adding SW information in the DT, I'm talking
> > about HW specific description. We have the same solution for pinctrl
> > configs (it's HW/driver specific).
>
> For pinctrl I do understand: these beasts can be very different from
> SoC to SoC, so having a single binding for all of them doesn't make
> much sense.
>
> However, here we are talking about simple linear storage, nothing
> special at all. I could see the need for an xlate to, for example,
> support a device with several partitions, but not just to allow each
> driver to have slightly incompatible bindings.
Maybe, but I guess that's up to the subsystem maintainer to decide what
he prefers.
> >
> > No because partitions defined the old way (as direct subnodes of the
> > MTD node) will be considered as NVMEM cells by the NVMEM framework,
> > and I don't want that.
>
> As I explained above, that is not currently the case. If the NVMEM
> framework is ever changed to explicitly parse NVMEM cells in advance,
> we can first update the few existing users to add the compatible string.
We're supposed to be backward compatible (compatible with old DTs), so
that's not an option, though we could add a way to check the compat
string afterwards.
>
> > Plus, I don't want people to start defining their NVMEM cells and
> > forget the compat string (which would work just fine because the
> > NVMEM framework doesn't care).
>
> A review of a new DTS should check that it uses each binding correctly;
> AFAIK the DT people do that. We could also add a warning when there is
> no compatible string, which would also help push people to update
> their DTS.
Yes, but I'd still prefer that we prevent people from referencing
mtd-nvmem cells when the node does not have an "nvmem-cell" compat.
>
> > >
> > > > What forces people to add this compatible in their
> > > > DT? Nothing. I'll tell you what will happen: people will start
> > > > defining their nvmem cells directly under the MTD node because
> > > > that *works*, and even if the binding is not documented and we
> > > > consider it invalid, we'll be stuck supporting it forever.
> > >
> > > Do note that undocumented bindings are not allowed. DTS that use
> > > undocumented bindings (normally) just get rejected.
> >
> > Except that's just in theory. In practice, if people can do something
> > wrong, they'll complain if you later fix the bug and break their
> > setup. So no, if we go for the "nvmem cells have an 'nvmem-cell'
> > compat", then I'd like the NVMEM framework to enforce that somehow.
>
> That should be trivial to implement.
Exactly, and that's why I'm insisting on this point.