[v5,00/12] Add persistent durable identifier to storage log messages

Message ID 20200925161929.1136806-1-tasleson@redhat.com

Message

Tony Asleson Sept. 25, 2020, 4:19 p.m. UTC
Today users have no easy way to correlate kernel log messages for storage
devices across reboots, dynamic device add/remove, or when a device is
physically or logically moved from system to system.  This is because the
existing log identifiers describe how a device is attached rather than
uniquely identifying what is attached.  Additionally, even when the
attachment hasn't changed, it's not always obvious which messages belong
to a device, as different areas of the storage stack use different
identifiers, e.g. sda, ata1.00, sd 0:0:0:0.

This series addresses that by adding a unique ID to each log message.  It
couples the existing structured key/value logging capability with VPD 0x83
device identification.  The structured key/value data is not visible in
normal viewing; it does not appear in dmesg or default journal output
unless you go looking for it, e.g. by dumping the journal output as JSON.
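
For example, the added field can be seen by dumping a matching journal
entry as JSON (a usage sketch that just combines the filter used in the
examples below with journalctl's standard -o json-pretty output mode):

$ journalctl -b -o json-pretty \
    _KERNEL_DURABLE_NAME="`cat /sys/block/sdb/device/wwid`"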

Some examples of logs filtered for a specific device using this patch
series:

$ journalctl -b  _KERNEL_DURABLE_NAME="`cat /sys/block/sdb/device/wwid`" 
| cut -c 25- | fmt -t
l: scsi 1:0:0:0: Attached scsi generic sg1 type 0
l: sd 1:0:0:0: [sdb] 209715200 512-byte logical blocks: (107 GB/100 GiB)
l: sd 1:0:0:0: [sdb] Write Protect is off
l: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00
l: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't
   support DPO or FUA
l: sd 1:0:0:0: [sdb] Attached SCSI disk
l: ata2.00: exception Emask 0x0 SAct 0x8 SErr 0x8 action 0x6 frozen
l: ata2.00: failed command: READ FPDMA QUEUED
l: ata2.00: cmd 60/01:18:10:27:00/00:00:00:00:00/40 tag 3 ncq dma 512
            in res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
l: ata2.00: status: { DRDY }
l: ata2.00: configured for UDMA/100
l: ata2.00: device reported invalid CHS sector 0
l: ata2.00: exception Emask 0x0 SAct 0x4000 SErr 0x4000 action 0x6 frozen
l: ata2.00: failed command: READ FPDMA QUEUED
l: ata2.00: cmd 60/01:70:10:27:00/00:00:00:00:00/40 tag 14 ncq dma 512
            in res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
l: ata2.00: status: { DRDY }
l: ata2.00: configured for UDMA/100
l: ata2.00: device reported invalid CHS sector 0
l: ata2.00: exception Emask 0x0 SAct 0x80000000 SErr 0x80000000 action
            0x6 frozen
l: ata2.00: failed command: READ FPDMA QUEUED
l: ata2.00: cmd 60/01:f8:10:27:00/00:00:00:00:00/40 tag 31 ncq dma 512
            in res 40/00:ff:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
l: ata2.00: status: { DRDY }
l: ata2.00: configured for UDMA/100
l: ata2.00: NCQ disabled due to excessive errors
l: ata2.00: exception Emask 0x0 SAct 0x40000 SErr 0x40000 action 0x6
            frozen
l: ata2.00: failed command: READ FPDMA QUEUED
l: ata2.00: cmd 60/01:90:10:27:00/00:00:00:00:00/40 tag 18 ncq dma 512
            in res 40/00:01:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
l: ata2.00: status: { DRDY }
l: ata2.00: configured for UDMA/100

$ journalctl -b  _KERNEL_DURABLE_NAME="`cat /sys/block/nvme0n1/wwid`" 
| cut -c 25- | fmt -t
l: blk_update_request: critical medium error, dev nvme0n1, sector 10000
   op 0x0:(READ) flags 0x80700 phys_seg 4 prio class 0
l: blk_update_request: critical medium error, dev nvme0n1, sector 10000
   op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
l: Buffer I/O error on dev nvme0n1, logical block 1250, async page read

$ journalctl -b  _KERNEL_DURABLE_NAME="`cat /sys/block/sdc/device/wwid`"
| cut -c 25- | fmt -t
l: sd 8:0:0:0: Power-on or device reset occurred
l: sd 8:0:0:0: [sdc] 16777216 512-byte logical blocks: (8.59 GB/8.00 GiB)
l: sd 8:0:0:0: Attached scsi generic sg2 type 0
l: sd 8:0:0:0: [sdc] Write Protect is off
l: sd 8:0:0:0: [sdc] Mode Sense: 63 00 00 08
l: sd 8:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't
   support DPO or FUA
l: sd 8:0:0:0: [sdc] Attached SCSI disk
l: sd 8:0:0:0: [sdc] tag#255 FAILED Result: hostbyte=DID_OK
   driverbyte=DRIVER_SENSE cmd_age=0s
l: sd 8:0:0:0: [sdc] tag#255 Sense Key : Medium Error [current]
l: sd 8:0:0:0: [sdc] tag#255 Add. Sense: Unrecovered read error
l: sd 8:0:0:0: [sdc] tag#255 CDB: Read(10) 28 00 00 00 27 10 00 00 01 00
l: blk_update_request: critical medium error, dev sdc, sector 10000 op
   0x0:(READ) flags 0x0 phys_seg 1 prio class 0

There should be no changes to log message content with this patch series.
To verify, I ran a release kernel and a kernel with this series applied,
forced both through the same error paths, and compared the output.

The first 6 commits in the series contain the changes needed for the
dev_printk code path.  The last 6 commits add the changes needed to use
durable_name_printk.  The function durable_name_printk is nothing more than
a printk that attaches the structured key/value durable name to otherwise
unmodified printk output.  I structured it this way so that a subset of the
patch series could be applied on its own if we cannot reach agreement on
the complete series.
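
For illustration, a minimal usage sketch (the device pointer and message
below are hypothetical; the call has the same signature as dev_printk):

	/* Logs the same text that printk(KERN_ERR ...) would produce, but
	 * also attaches DURABLE_NAME=<VPD 0x83 id> as structured key/value
	 * data that journald stores with the entry. */
	durable_name_printk(KERN_ERR, dev,
			    "critical medium error, sector %llu\n",
			    (unsigned long long)sector);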

v2:
- Incorporated changes suggested by James Bottomley
- Removed the string function that stripped leading/trailing/duplicate
  adjacent spaces from the generated id; the value now matches
  /sys/block/<device>/device/wwid
- Removed the xfs patch, limiting changes to the lower block layers
- Moved the callback from struct device_type to struct device.  struct
  device_type is typically static const, and because a number of different
  areas share the genhd implementation it cannot be modified for each of
  them.

v3:
- Increased the size of the buffers for NVMe id generation and
  dev_vprintk_emit
  
v4:
- Backed out dev_printk for locations that weren't using it before, so that
  using durable_name_printk does not change the content of the user-visible
  log message.
- Removed RFC from the patch series.

v5:
- Reduced stack usage for the NVMe wwid
- Made ata_scsi_durable_name static (found by the kernel test robot)
- Incorporated suggested changes from Andy Shevchenko and Sergei Shtylyov
  * Removed unneeded line spacing
  * Corrected spelling
  * Removed unneeded () in a conditional operator
  * Re-worked expressions to follow common kernel patterns, added the
    function dev_to_scsi_device
- Rebased on the v5.8 branch

Tony Asleson (12):
  struct device: Add function callback durable_name
  create_syslog_header: Add durable name
  dev_vprintk_emit: Increase hdr size
  scsi: Add durable_name for dev_printk
  nvme: Add durable name for dev_printk
  libata: Add ata_scsi_durable_name
  libata: Make ata_scsi_durable_name static
  Add durable_name_printk
  libata: use durable_name_printk
  Add durable_name_printk_ratelimited
  print_req_error: Use durable_name_printk_ratelimited
  buffer_io_error: Use durable_name_printk_ratelimited

 block/blk-core.c           |  5 ++++-
 drivers/ata/libata-core.c  | 17 +++++++-------
 drivers/ata/libata-scsi.c  | 18 ++++++++++++---
 drivers/base/core.c        | 46 +++++++++++++++++++++++++++++++++++++-
 drivers/nvme/host/core.c   | 18 +++++++++++++++
 drivers/scsi/scsi_lib.c    |  9 ++++++++
 drivers/scsi/scsi_sysfs.c  | 35 +++++++++++++++++++++++------
 drivers/scsi/sd.c          |  2 ++
 fs/buffer.c                | 15 +++++++++----
 include/linux/dev_printk.h | 14 ++++++++++++
 include/linux/device.h     |  4 ++++
 include/scsi/scsi_device.h |  3 +++
 12 files changed, 162 insertions(+), 24 deletions(-)


base-commit: bcf876870b95592b52519ed4aafcf9d95999bc9c

Comments

Randy Dunlap Sept. 26, 2020, 11:53 p.m. UTC | #1
On 9/25/20 9:19 AM, Tony Asleson wrote:
> Ideally block related code would standardize on using dev_printk,
> but dev_printk does change the user visible messages which is
> questionable.  Adding this function which adds the structured
> key/value durable name to the log entry.  It has the
> same signature as dev_printk.  In the future, code that
> is using this could easily transition to dev_printk when that
> becomes workable.
> 
> Signed-off-by: Tony Asleson <tasleson@redhat.com>
> ---
>  drivers/base/core.c        | 15 +++++++++++++++
>  include/linux/dev_printk.h |  5 +++++
>  2 files changed, 20 insertions(+)

Hi,

I suggest that these 2 new function names should be
	printk_durable_name()
and
	printk_durable_name_ratelimited()

Those names would be closer to the printk* family of
function names.  Of course, you can find exceptions to this,
like dev_printk(), but that is in the dev_*() family of
function names.


> diff --git a/drivers/base/core.c b/drivers/base/core.c
> index 72a93b041a2d..447b0ebc93af 100644
> --- a/drivers/base/core.c
> +++ b/drivers/base/core.c
> @@ -3975,6 +3975,21 @@ void dev_printk(const char *level, const struct device *dev,
>  }
>  EXPORT_SYMBOL(dev_printk);
>  
> +void durable_name_printk(const char *level, const struct device *dev,
> +		const char *fmt, ...)
> +{
> +	size_t dictlen;
> +	va_list args;
> +	char dict[288];
> +
> +	dictlen = dev_durable_name(dev, dict, sizeof(dict));
> +
> +	va_start(args, fmt);
> +	vprintk_emit(0, level[1] - '0', dict, dictlen, fmt, args);
> +	va_end(args);
> +}
> +EXPORT_SYMBOL(durable_name_printk);
> +
>  #define define_dev_printk_level(func, kern_level)		\
>  void func(const struct device *dev, const char *fmt, ...)	\
>  {								\
> diff --git a/include/linux/dev_printk.h b/include/linux/dev_printk.h
> index 3028b644b4fb..4d57b940b692 100644
> --- a/include/linux/dev_printk.h
> +++ b/include/linux/dev_printk.h
> @@ -32,6 +32,11 @@ int dev_printk_emit(int level, const struct device *dev, const char *fmt, ...);
>  __printf(3, 4) __cold
>  void dev_printk(const char *level, const struct device *dev,
>  		const char *fmt, ...);
> +
> +__printf(3, 4) __cold
> +void durable_name_printk(const char *level, const struct device *dev,
> +			const char *fmt, ...);
> +
>  __printf(2, 3) __cold
>  void _dev_emerg(const struct device *dev, const char *fmt, ...);
>  __printf(2, 3) __cold
> 

Thanks.
Tony Asleson Sept. 27, 2020, 2:22 p.m. UTC | #2
On 9/26/20 4:08 AM, Sergei Shtylyov wrote:
> On 25.09.2020 19:19, Tony Asleson wrote:
> 
>> Function callback and function to be used to write a persistent
>> durable name to the supplied character buffer.  This will be used to add
>> structured key-value data to log messages for hardware related errors
>> which allows end users to correlate message and specific hardware.
>>
>> Signed-off-by: Tony Asleson <tasleson@redhat.com>
>> ---
>>   drivers/base/core.c    | 24 ++++++++++++++++++++++++
>>   include/linux/device.h |  4 ++++
>>   2 files changed, 28 insertions(+)
>>
>> diff --git a/drivers/base/core.c b/drivers/base/core.c
>> index 05d414e9e8a4..88696ade8bfc 100644
>> --- a/drivers/base/core.c
>> +++ b/drivers/base/core.c
>> @@ -2489,6 +2489,30 @@ int dev_set_name(struct device *dev, const char
>> *fmt, ...)
>>   }
>>   EXPORT_SYMBOL_GPL(dev_set_name);
>>   +/**
>> + * dev_durable_name - Write "DURABLE_NAME"=<durable name> in buffer
>> + * @dev: device
>> + * @buffer: character buffer to write results
>> + * @len: length of buffer
>> + * @return: Number of bytes written to buffer
> 
>    This is not how the kernel-doc comments describe the function result,
> IIRC...

I did my compile with `make W=1` and there aren't any warnings/errors
with the source documentation, but the documentation does indeed outline a
different syntax.  It's interesting how common the @return syntax is in
the existing code base.

I'll re-work the function documentation return.

Thanks
Sergei Shtylyov Sept. 27, 2020, 4:15 p.m. UTC | #3
On 27.09.2020 17:22, Tony Asleson wrote:

>>> Function callback and function to be used to write a persistent
>>> durable name to the supplied character buffer.  This will be used to add
>>> structured key-value data to log messages for hardware related errors
>>> which allows end users to correlate message and specific hardware.
>>>
>>> Signed-off-by: Tony Asleson <tasleson@redhat.com>
>>> ---
>>>    drivers/base/core.c    | 24 ++++++++++++++++++++++++
>>>    include/linux/device.h |  4 ++++
>>>    2 files changed, 28 insertions(+)
>>>
>>> diff --git a/drivers/base/core.c b/drivers/base/core.c
>>> index 05d414e9e8a4..88696ade8bfc 100644
>>> --- a/drivers/base/core.c
>>> +++ b/drivers/base/core.c
>>> @@ -2489,6 +2489,30 @@ int dev_set_name(struct device *dev, const char
>>> *fmt, ...)
>>>    }
>>>    EXPORT_SYMBOL_GPL(dev_set_name);
>>>    +/**
>>> + * dev_durable_name - Write "DURABLE_NAME"=<durable name> in buffer
>>> + * @dev: device
>>> + * @buffer: character buffer to write results
>>> + * @len: length of buffer
>>> + * @return: Number of bytes written to buffer
>>
>>     This is not how the kernel-doc comments describe the function result,
>> IIRC...
>
> I did my compile with `make W=1` and there aren't any warnings/errors
> with the source documentation, but the documentation does indeed outline a

    IIRC, you only get the warnings when you try to build the kernel-docs.

> different syntax.  It's interesting how common the @return syntax is in
> the existing code base.

    FWIW, I'm seeing @return: for the 1st time in my Linux tenure (since 2004).

> I'll re-work the function documentation return.

    OK, thanks. :-)

> Thanks

MBR, Sergei
Tony Asleson Sept. 28, 2020, 3:52 p.m. UTC | #4
On 9/26/20 6:53 PM, Randy Dunlap wrote:
> I suggest that these 2 new function names should be
> 	printk_durable_name()
> and
> 	printk_durable_name_ratelimited()
> 
> Those names would be closer to the printk* family of
> function names.  Of course, you can find exceptions to this,
> like dev_printk(), but that is in the dev_*() family of
> function names.

durable_name_printk has the same argument signature as dev_printk, with
the intention that in the future it might be a candidate to be changed
to dev_printk.  The reason I'm not using dev_printk is to avoid changing
the content of the message users see.

With this clarification, do you still suggest the rename, or would you
suggest something different?

dev_id_printk
id_printk
...

I'm also thinking that maybe we should add a new function that does
everything dev_printk does, but without prepending the device driver name
and device name to the message, so we get the metadata additions without
changing the content of the message.

Thanks
Randy Dunlap Sept. 28, 2020, 5:32 p.m. UTC | #5
On 9/28/20 8:52 AM, Tony Asleson wrote:
> On 9/26/20 6:53 PM, Randy Dunlap wrote:
>> I suggest that these 2 new function names should be
>> 	printk_durable_name()
>> and
>> 	printk_durable_name_ratelimited()
>>
>> Those names would be closer to the printk* family of
>> function names.  Of course, you can find exceptions to this,
>> like dev_printk(), but that is in the dev_*() family of
>> function names.
> 
> durable_name_printk has the same argument signature as dev_printk, with
> the intention that in the future it might be a candidate to be changed
> to dev_printk.  The reason I'm not using dev_printk is to avoid changing
> the content of the message users see.
> 
> With this clarification, do you still suggest the rename, or would you
> suggest something different?

Since you seem to bring it up, "durable_name" is a bit long IMO.

But yes, I still prefer printk_durable_name() etc. The other order seems
backwards to me. But that's still just an opinion.


> dev_id_printk
> id_printk
> ...
> 
> I'm also thinking that maybe we should add a new function that does
> everything dev_printk does, but without prepending the device driver name
> and device name to the message, so we get the metadata additions without
> changing the content of the message.

thanks.
-- 
~Randy
Reported-by: Randy Dunlap <rdunlap@infradead.org>
Greg Kroah-Hartman Sept. 29, 2020, 6:04 p.m. UTC | #6
On Tue, Sep 29, 2020 at 06:51:02PM +0100, Christoph Hellwig wrote:
> Independent of my opinion on the whole scheme that I shared last time,
> we really should not bloat struct device with function pointers.
> 
> On Fri, Sep 25, 2020 at 11:19:18AM -0500, Tony Asleson wrote:
> > Function callback and function to be used to write a persistent
> > durable name to the supplied character buffer.  This will be used to add
> > structured key-value data to log messages for hardware related errors
> > which allows end users to correlate message and specific hardware.
> > 
> > Signed-off-by: Tony Asleson <tasleson@redhat.com>
> > ---
> >  drivers/base/core.c    | 24 ++++++++++++++++++++++++
> >  include/linux/device.h |  4 ++++
> >  2 files changed, 28 insertions(+)

I can't find this patch anywhere in my archives, why was I not cc:ed on
it?  It's a v5 and no one thought to ask the driver core
developers/maintainers about it???

{sigh}

And for log messages, what about the dynamic debug developers, why not
include them as well?  Since when is this a storage-only thing?

> > 
> > diff --git a/drivers/base/core.c b/drivers/base/core.c
> > index 05d414e9e8a4..88696ade8bfc 100644
> > --- a/drivers/base/core.c
> > +++ b/drivers/base/core.c
> > @@ -2489,6 +2489,30 @@ int dev_set_name(struct device *dev, const char *fmt, ...)
> >  }
> >  EXPORT_SYMBOL_GPL(dev_set_name);
> >  
> > +/**
> > + * dev_durable_name - Write "DURABLE_NAME"=<durable name> in buffer
> > + * @dev: device
> > + * @buffer: character buffer to write results
> > + * @len: length of buffer
> > + * @return: Number of bytes written to buffer
> > + */
> > +int dev_durable_name(const struct device *dev, char *buffer, size_t len)
> > +{
> > +	int tmp, dlen;
> > +
> > +	if (dev && dev->durable_name) {
> > +		tmp = snprintf(buffer, len, "DURABLE_NAME=");
> > +		if (tmp < len) {
> > +			dlen = dev->durable_name(dev, buffer + tmp,
> > +							len - tmp);
> > +			if (dlen > 0 && ((dlen + tmp) < len))
> > +				return dlen + tmp;
> > +		}
> > +	}
> > +	return 0;
> > +}
> > +EXPORT_SYMBOL_GPL(dev_durable_name);
> > +
> >  /**
> >   * device_to_dev_kobj - select a /sys/dev/ directory for the device
> >   * @dev: device
> > diff --git a/include/linux/device.h b/include/linux/device.h
> > index 5efed864b387..074125999dd8 100644
> > --- a/include/linux/device.h
> > +++ b/include/linux/device.h
> > @@ -614,6 +614,8 @@ struct device {
> >  	struct iommu_group	*iommu_group;
> >  	struct dev_iommu	*iommu;
> >  
> > +	int (*durable_name)(const struct device *dev, char *buff, size_t len);

No, that's not ok at all, why is this even a thing?

Who is setting this?  Why can't the bus do this without anything
"special" needed from the driver core?

We have a mapping of 'struct device' to a unique hardware device at a
specific point in time, why are you trying to create another one?

What is wrong with what we have today?

So this is a HARD NAK on this patch for now.

thanks,

greg k-h
Tony Asleson Sept. 29, 2020, 10:04 p.m. UTC | #7
On 9/29/20 1:04 PM, Greg Kroah-Hartman wrote:
> On Tue, Sep 29, 2020 at 06:51:02PM +0100, Christoph Hellwig wrote:
>> Independent of my opinion on the whole scheme that I shared last time,
>> we really should not bloat struct device with function pointers.

Christoph, thank you for sharing a bit more of your concerns and
bringing Greg into the discussion.  It's more productive.

>>
>> On Fri, Sep 25, 2020 at 11:19:18AM -0500, Tony Asleson wrote:
>>> Function callback and function to be used to write a persistent
>>> durable name to the supplied character buffer.  This will be used to add
>>> structured key-value data to log messages for hardware related errors
>>> which allows end users to correlate message and specific hardware.
>>>
>>> Signed-off-by: Tony Asleson <tasleson@redhat.com>
>>> ---
>>>  drivers/base/core.c    | 24 ++++++++++++++++++++++++
>>>  include/linux/device.h |  4 ++++
>>>  2 files changed, 28 insertions(+)
> 
> I can't find this patch anywhere in my archives, why was I not cc:ed on
> it?  It's a v5 and no one thought to ask the driver core
> developers/maintainers about it???

You were CC'd into v3, ref.

https://lore.kernel.org/linux-ide/20200714081750.GB862637@kroah.com/

I should have continued to CC you on it, sorry about that.


> {sigh}
> 
> And for log messages, what about the dynamic debug developers, why not
> include them as well?  Since when is this a storage-only thing?

Hannes Reinecke has been involved in the discussion some and he's
involved in dynamic debug AFAIK.

If others have a need to identify a specific piece of hardware in a
potential sea of identical hardware that is encountering errors and
logging messages and can optionally be added and removed at run-time,
then yes they should be included too.  There is nothing with this patch
series that is preventing any device from registering a callback which
provides this information when asked.


>>> diff --git a/include/linux/device.h b/include/linux/device.h
>>> index 5efed864b387..074125999dd8 100644
>>> --- a/include/linux/device.h
>>> +++ b/include/linux/device.h
>>> @@ -614,6 +614,8 @@ struct device {
>>>  	struct iommu_group	*iommu_group;
>>>  	struct dev_iommu	*iommu;
>>>  
>>> +	int (*durable_name)(const struct device *dev, char *buff, size_t len);
> 
> No, that's not ok at all, why is this even a thing?
> 
> Who is setting this?  Why can't the bus do this without anything
> "special" needed from the driver core?

I'm setting it in the different storage subsystems.  The intent is that
when we go through a common logging path, e.g. dev_printk, you can ask the
device what its ID is so that it can be logged as structured data with
the log message.  I was trying to avoid having separate logging
functions all over the place, which enforces consistent meta data on
log messages, but maybe that would be better than adding a function
pointer to struct device?  My first patch tried adding a callback to
struct device_type, but that ran into the issue that struct device_type
is static const in a number of areas.

Basically for any piece of hardware with a serial number, call this
function to get it.  That was the intent anyway.
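
For illustration only, a minimal sketch of the registration pattern being
described (the helper my_cached_wwid() and the exact hook point are
hypothetical, not code from this series):

	/* Hypothetical callback: copy the device's stable identifier
	 * (e.g. a cached VPD 0x83 / wwid string) into the buffer. */
	static int my_durable_name(const struct device *dev, char *buf,
				   size_t len)
	{
		return snprintf(buf, len, "%s", my_cached_wwid(dev));
	}

	/* ...registered by the owning subsystem at device setup time: */
	dev->durable_name = my_durable_name;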

> We have a mapping of 'struct device' to a unique hardware device at a
> specific point in time, why are you trying to create another one?

I don't think I'm creating anything new.  Can you clarify this a bit
more?  I'm trying to leverage what is already in place.

> What is wrong with what we have today?


I'm trying to figure out a way to positively identify which storage
device an error belongs to over time.  Logging how something is attached
and not what is attached, only solves for right now, this point in time.
Additionally, when the only identifier is in the actual message itself,
it makes user space break every time the message content changes.  Also,
what we have today in dev_printk doesn't tag the meta-data consistently
for journald to leverage.

This patch series adds structured data to the log entry for positive
identification.  It doesn't change when the content of the log message
changes.  It's true over time and it's true if a device gets moved to a
different system.

> So this is a HARD NAK on this patch for now.


Thank you for supplying some feedback and asking questions.  I've been
asking for suggestions and would very much like to have a discussion on
how this issue is best solved.  I'm not attached to what I've provided.
I'm just trying to get towards a solution.


We've looked at user space quite a bit and there is an inherent race
condition with trying to fetch the unique hardware id for a message when
it gets emitted from the kernel as udev rules haven't even run (assuming
we even have the meta-data to make the association).  The last thing
people want to do is delay writing the log message to disk until the
device it belongs to can be identified.  Of course this patch series
still has a window from where the device is first discovered by the
kernel and fetches the needed vpd data from the device.  Any kernel
messages logged during that time have no id to associate with it.

Thanks,
Tony
Greg Kroah-Hartman Sept. 30, 2020, 7:38 a.m. UTC | #8
On Tue, Sep 29, 2020 at 05:04:32PM -0500, Tony Asleson wrote:
> On 9/29/20 1:04 PM, Greg Kroah-Hartman wrote:
> > On Tue, Sep 29, 2020 at 06:51:02PM +0100, Christoph Hellwig wrote:
> >> Independent of my opinion on the whole scheme that I shared last time,
> >> we really should not bloat struct device with function pointers.
> 
> Christoph, thank you for sharing a bit more of your concerns and
> bringing Greg into the discussion.  It's more productive.
> 
> >>
> >> On Fri, Sep 25, 2020 at 11:19:18AM -0500, Tony Asleson wrote:
> >>> Function callback and function to be used to write a persistent
> >>> durable name to the supplied character buffer.  This will be used to add
> >>> structured key-value data to log messages for hardware related errors
> >>> which allows end users to correlate message and specific hardware.
> >>>
> >>> Signed-off-by: Tony Asleson <tasleson@redhat.com>
> >>> ---
> >>>  drivers/base/core.c    | 24 ++++++++++++++++++++++++
> >>>  include/linux/device.h |  4 ++++
> >>>  2 files changed, 28 insertions(+)
> > 
> > I can't find this patch anywhere in my archives, why was I not cc:ed on
> > it?  It's a v5 and no one thought to ask the driver core
> > developers/maintainers about it???
> 
> You were CC'd into v3, ref.
> 
> https://lore.kernel.org/linux-ide/20200714081750.GB862637@kroah.com/

Yeah, and I rejected that patch too, no wonder you didn't include me in
further patch reviews, you didn't want me to reject them either :)
> > {sigh}

> > 

> > And for log messages, what about the dynamic debug developers, why not

> > include them as well?  Since when is this a storage-only thing?

> 

> Hannes Reinecke has been involved in the discussion some and he's

> involved in dynamic debug AFAIK.


From the maintainers file:
	DYNAMIC DEBUG
	M:      Jason Baron <jbaron@akamai.com>
	S:      Maintained
	F:      include/linux/dynamic_debug.h
	F:      lib/dynamic_debug.c

Come on, you know this, don't try to avoid the people who have to
maintain the code you are wanting to change, that's not ok.

> If others have a need to identify a specific piece of hardware in a
> potential sea of identical hardware that is encountering errors and
> logging messages and can optionally be added and removed at run-time,
> then yes they should be included too.  There is nothing with this patch
> series that is preventing any device from registering a callback which
> provides this information when asked.

But that's not what the kernel is supposed to be doing, it doesn't care
about tracking things outside of the lifetime from when it can see a
device.  That's what userspace is for.

> >>> diff --git a/include/linux/device.h b/include/linux/device.h
> >>> index 5efed864b387..074125999dd8 100644
> >>> --- a/include/linux/device.h
> >>> +++ b/include/linux/device.h
> >>> @@ -614,6 +614,8 @@ struct device {
> >>>  	struct iommu_group	*iommu_group;
> >>>  	struct dev_iommu	*iommu;
> >>>  
> >>> +	int (*durable_name)(const struct device *dev, char *buff, size_t len);
> > 
> > No, that's not ok at all, why is this even a thing?
> > 
> > Who is setting this?  Why can't the bus do this without anything
> > "special" needed from the driver core?
> 
> I'm setting it in the different storage subsystems.  The intent is that
> when we go through a common logging path, e.g. dev_printk, you can ask the
> device what its ID is so that it can be logged as structured data with
> the log message.  I was trying to avoid having separate logging
> functions all over the place, which enforces consistent meta data on
> log messages, but maybe that would be better than adding a function
> pointer to struct device?  My first patch tried adding a callback to
> struct device_type, but that ran into the issue that struct device_type
> is static const in a number of areas.
> 
> Basically for any piece of hardware with a serial number, call this
> function to get it.  That was the intent anyway.

That's not ok, again, if you really want something crazy like this, then
modify your logging messages to provide it.  But I would strongly argue
that you do not really need this, as you have this information at
runtime, in userspace, already.

> > We have a mapping of 'struct device' to a unique hardware device at a
> > specific point in time, why are you trying to create another one?
> 
> I don't think I'm creating anything new.  Can you clarify this a bit
> more?  I'm trying to leverage what is already in place.
> 
> > What is wrong with what we have today?
> 
> I'm trying to figure out a way to positively identify which storage
> device an error belongs to over time.

"over time" is not the kernel's responsibility.

This comes up every 5 years or so. The kernel provides you, at runtime,
a mapping between a hardware device and a "logical" device.  It can
provide information to userspace about this mapping, but once that
device goes away, the kernel is free to reuse that logical device again.

If you want to track what logical devices match up to what physical
device, then do it in userspace, by parsing the log files.  All of the
needed information is already there, you don't have to add anything new
to it.

Again, this comes up every few years for some reason, because people
feel it is easier to modify the kernel than work with userspace
programs.  Odd...

> > So this is a HARD NAK on this patch for now.
> 
> Thank you for supplying some feedback and asking questions.  I've been
> asking for suggestions and would very much like to have a discussion on
> how this issue is best solved.  I'm not attached to what I've provided.
> I'm just trying to get towards a solution.

Again, solve this in userspace, you have the information there at
runtime, why not use it?

> We've looked at user space quite a bit and there is an inherent race
> condition with trying to fetch the unique hardware id for a message when
> it gets emitted from the kernel as udev rules haven't even run (assuming
> we even have the meta-data to make the association).

But one moment later you do have the information, so you can properly
correlate it, right?

> The last thing
> people want to do is delay writing the log message to disk until the
> device it belongs to can be identified.  Of course this patch series
> still has a window from where the device is first discovered by the
> kernel and fetches the needed vpd data from the device.  Any kernel
> messages logged during that time have no id to associate with it.

No need to delay logging, you are looking at things way later in time,
so there's no issue of race conditions or anything else.

thanks,

greg k-h
Greg Kroah-Hartman Sept. 30, 2020, 7:40 a.m. UTC | #9
On Wed, Sep 30, 2020 at 09:38:59AM +0200, Greg Kroah-Hartman wrote:
> > > {sigh}
> > > 
> > > And for log messages, what about the dynamic debug developers, why not
> > > include them as well?  Since when is this a storage-only thing?
> > 
> > Hannes Reinecke has been involved in the discussion some and he's
> > involved in dynamic debug AFAIK.
> 
> From the maintainers file:
> 	DYNAMIC DEBUG
> 	M:      Jason Baron <jbaron@akamai.com>
> 	S:      Maintained
> 	F:      include/linux/dynamic_debug.h
> 	F:      lib/dynamic_debug.c
> 
> Come on, you know this, don't try to avoid the people who have to
> maintain the code you are wanting to change, that's not ok.

Also, why are you not using scripts/get_maintainer.pl on your patches?
It would have told you about this...

greg k-h
Tony Asleson Sept. 30, 2020, 2:35 p.m. UTC | #10
On 9/30/20 2:38 AM, Greg Kroah-Hartman wrote:
> On Tue, Sep 29, 2020 at 05:04:32PM -0500, Tony Asleson wrote:
>> I'm trying to figure out a way to positively identify which storage
>> device an error belongs to over time.
> 
> "over time" is not the kernel's responsibility.
> 
> This comes up every 5 years or so. The kernel provides you, at runtime,
> a mapping between a hardware device and a "logical" device.  It can
> provide information to userspace about this mapping, but once that
> device goes away, the kernel is free to reuse that logical device again.
> 
> If you want to track what logical devices match up to what physical
> device, then do it in userspace, by parsing the log files.

I don't understand why people think it's acceptable to ask user space to
parse text that is subject to change.

>> Thank you for supplying some feedback and asking questions.  I've been
>> asking for suggestions and would very much like to have a discussion on
>> how this issue is best solved.  I'm not attached to what I've provided.
>> I'm just trying to get towards a solution.
> 
> Again, solve this in userspace, you have the information there at
> runtime, why not use it?

We usually don't have the needed information if you remove the
expectation that user space should parse the human readable portion of
the error message.

>> We've looked at user space quite a bit and there is an inherent race
>> condition with trying to fetch the unique hardware id for a message when
>> it gets emitted from the kernel as udev rules haven't even run (assuming
>> we even have the meta-data to make the association).
> 
> But one moment later you do have the information, so you can properly
> correlate it, right?

We could have the information if all the storage paths went through
dev_printk.  Here is what we get today when we encounter a read error
which uses printk in the block layer:

{
        "_HOSTNAME" : "pn",
        "_TRANSPORT" : "kernel",
        "__MONOTONIC_TIMESTAMP" : "1806379233",
        "SYSLOG_IDENTIFIER" : "kernel",
        "_SOURCE_MONOTONIC_TIMESTAMP" : "1805611354",
        "SYSLOG_FACILITY" : "0",
        "MESSAGE" : "blk_update_request: critical medium error, dev
nvme0n1, sector 10000 op 0x0:(READ) flags 0x80700 phys_seg 3 prio class 0",
        "PRIORITY" : "3",
        "_MACHINE_ID" : "3f31a0847cea4c95b7a9cec13d07deeb",
        "__REALTIME_TIMESTAMP" : "1601471260802301",
        "_BOOT_ID" : "b03ed610f21d46ab8243a495ba5a0058",
        "__CURSOR" :
"s=a063a22bbb384da0b0412e8f652deabb;i=23c2;b=b03ed610f21d46ab8243a495ba5a0058;m=6bab28e1;t=5b087959e3cfd;x=20528862f8f765c9"
}

Unless you parse the message text you cannot make the association.  If
the same message was changed to dev_printk we would get:


{
        "__REALTIME_TIMESTAMP" : "1589401901093443",
        "__CURSOR" :
"s=caac9703b34a48fd92f7875adae55a2f;i=1c713;b=e2ae14a9def345aa803a13648b95429c;m=7d25b4f;t=5a58d77b85243;x=b034c2d3fb853870",
        "SYSLOG_IDENTIFIER" : "kernel",
        "_KERNEL_DEVICE" : "b259:917504",
        "__MONOTONIC_TIMESTAMP" : "131226447",
        "_UDEV_SYSNAME" : "nvme0n1",
        "PRIORITY" : "3",
        "_KERNEL_SUBSYSTEM" : "block",
        "_SOURCE_MONOTONIC_TIMESTAMP" : "130941917",
        "_TRANSPORT" : "kernel",
        "_MACHINE_ID" : "3f31a0847cea4c95b7a9cec13d07deeb",
        "_HOSTNAME" : "pn",
        "SYSLOG_FACILITY" : "0",
        "_BOOT_ID" : "e2ae14a9def345aa803a13648b95429c",
        "_UDEV_DEVLINK" : [
                "/dev/disk/by-uuid/22fc262a-d621-452a-a951-7761d9fcf0dc",
                "/dev/disk/by-path/pci-0000:00:05.0-nvme-1",

"/dev/disk/by-id/nvme-nvme.8086-4445414442454546-51454d55204e564d65204374726c-00000001",
                "/dev/disk/by-id/nvme-QEMU_NVMe_Ctrl_DEADBEEF"
        ],
        "MESSAGE" : "block nvme0n1: blk_update_request: critical medium
error, dev nvme0n1, sector 10000 op 0x0:(READ) flags 0x0 phys_seg 1 prio
class 0",
        "_UDEV_DEVNODE" : "/dev/nvme0n1"
}


Journald already knows how to utilize the dev_printk meta data.

One idea that I've suggested along the way is creating a dev_printk
function that doesn't change the message text.  We then avoid breaking
people that are parsing.  Is this something that would be acceptable to
folks?  It doesn't solve early boot where udev rules haven't even run,
but it's better.

Thanks,
Tony
Greg Kroah-Hartman Oct. 1, 2020, 11:48 a.m. UTC | #11
On Wed, Sep 30, 2020 at 09:35:52AM -0500, Tony Asleson wrote:
> On 9/30/20 2:38 AM, Greg Kroah-Hartman wrote:
> > On Tue, Sep 29, 2020 at 05:04:32PM -0500, Tony Asleson wrote:
> >> I'm trying to figure out a way to positively identify which storage
> >> device an error belongs to over time.
> > 
> > "over time" is not the kernel's responsibility.
> > 
> > This comes up every 5 years or so. The kernel provides you, at runtime,
> > a mapping between a hardware device and a "logical" device.  It can
> > provide information to userspace about this mapping, but once that
> > device goes away, the kernel is free to reuse that logical device again.
> > 
> > If you want to track what logical devices match up to what physical
> > device, then do it in userspace, by parsing the log files.
> 
> I don't understand why people think it's acceptable to ask user space to
> parse text that is subject to change.

What text is changing?  The format of the prefix of dev_*() is well
known and has been stable for 15+ years now, right?  What is difficult
in parsing it?

> >> Thank you for supplying some feedback and asking questions.  I've been
> >> asking for suggestions and would very much like to have a discussion on
> >> how this issue is best solved.  I'm not attached to what I've provided.
> >> I'm just trying to get towards a solution.
> > 
> > Again, solve this in userspace, you have the information there at
> > runtime, why not use it?
> 
> We usually don't have the needed information if you remove the
> expectation that user space should parse the human readable portion of
> the error message.

I don't expect that userspace should have to parse any human readable
portion, if they don't want to.  But if you do want it to, it is pretty
trivial to parse what you have today:

	scsi 2:0:0:0: Direct-Access     Generic  STORAGE DEVICE   1531 PQ: 0 ANSI: 6

If you really have a unique identifier, then great, parse it today:

	usb 4-1.3.1: Product: USB3.0 Card Reader
	usb 4-1.3.1: Manufacturer: Generic
	usb 4-1.3.1: SerialNumber: 000000001531

What's keeping that from working now?

In fact, I would argue that it does seem to work, as there are many
commercial tools out there that seem to handle it just fine...

> >> We've looked at user space quite a bit and there is an inherent race
> >> condition with trying to fetch the unique hardware id for a message when
> >> it gets emitted from the kernel as udev rules haven't even run (assuming
> >> we even have the meta-data to make the association).
> > 
> > But one moment later you do have the information, so you can properly
> > correlate it, right?
> 
> We could have the information if all the storage paths went through
> dev_printk.  Here is what we get today when we encounter a read error
> which uses printk in the block layer:
> 
> {
>         "_HOSTNAME" : "pn",
>         "_TRANSPORT" : "kernel",
>         "__MONOTONIC_TIMESTAMP" : "1806379233",
>         "SYSLOG_IDENTIFIER" : "kernel",
>         "_SOURCE_MONOTONIC_TIMESTAMP" : "1805611354",
>         "SYSLOG_FACILITY" : "0",
>         "MESSAGE" : "blk_update_request: critical medium error, dev
> nvme0n1, sector 10000 op 0x0:(READ) flags 0x80700 phys_seg 3 prio class 0",
>         "PRIORITY" : "3",
>         "_MACHINE_ID" : "3f31a0847cea4c95b7a9cec13d07deeb",
>         "__REALTIME_TIMESTAMP" : "1601471260802301",
>         "_BOOT_ID" : "b03ed610f21d46ab8243a495ba5a0058",
>         "__CURSOR" :
> "s=a063a22bbb384da0b0412e8f652deabb;i=23c2;b=b03ed610f21d46ab8243a495ba5a0058;m=6bab28e1;t=5b087959e3cfd;x=20528862f8f765c9"
> }

Ok, messy stuff, don't do that :)

> Unless you parse the message text you cannot make the association.  If
> the same message was changed to dev_printk we would get:
> 
> 
> {
>         "__REALTIME_TIMESTAMP" : "1589401901093443",
>         "__CURSOR" :
> "s=caac9703b34a48fd92f7875adae55a2f;i=1c713;b=e2ae14a9def345aa803a13648b95429c;m=7d25b4f;t=5a58d77b85243;x=b034c2d3fb853870",
>         "SYSLOG_IDENTIFIER" : "kernel",
>         "_KERNEL_DEVICE" : "b259:917504",
>         "__MONOTONIC_TIMESTAMP" : "131226447",
>         "_UDEV_SYSNAME" : "nvme0n1",
>         "PRIORITY" : "3",
>         "_KERNEL_SUBSYSTEM" : "block",
>         "_SOURCE_MONOTONIC_TIMESTAMP" : "130941917",
>         "_TRANSPORT" : "kernel",
>         "_MACHINE_ID" : "3f31a0847cea4c95b7a9cec13d07deeb",
>         "_HOSTNAME" : "pn",
>         "SYSLOG_FACILITY" : "0",
>         "_BOOT_ID" : "e2ae14a9def345aa803a13648b95429c",
>         "_UDEV_DEVLINK" : [
>                 "/dev/disk/by-uuid/22fc262a-d621-452a-a951-7761d9fcf0dc",
>                 "/dev/disk/by-path/pci-0000:00:05.0-nvme-1",
> 
> "/dev/disk/by-id/nvme-nvme.8086-4445414442454546-51454d55204e564d65204374726c-00000001",
>                 "/dev/disk/by-id/nvme-QEMU_NVMe_Ctrl_DEADBEEF"
>         ],
>         "MESSAGE" : "block nvme0n1: blk_update_request: critical medium
> error, dev nvme0n1, sector 10000 op 0x0:(READ) flags 0x0 phys_seg 1 prio
> class 0",
>         "_UDEV_DEVNODE" : "/dev/nvme0n1"
> }

Great, you have a udev sysname, a kernel subsystem and a way to
associate that with a real device, what more are you wanting?

> Journald already knows how to utilize the dev_printk meta data.

And if you talk to the printk developers (which you seem to be keeping
out of the loop here), they are ripping out the meta data facility as
fast as possible.  So don't rely on extending that please.

> One idea that I've suggested along the way is creating a dev_printk
> function that doesn't change the message text.  We then avoid breaking
> people that are parsing.  Is this something that would be acceptable to
> folks?  It doesn't solve early boot where udev rules haven't even run,
> but it's better.

I still fail to understand the root problem here.

Ok, no, I think I understand what you think the problem is, I just don't
see why it is up to the kernel to change what we have today when there
are lots of tools out there working just fine without any kernel changes
needed.

So try explaining the problem as you see it please, so we all know where
to work from.

But again, cutting out the developers of the subsystems you were wanting
to modify might just be making me really grumpy about this whole
thing...

greg k-h
Tony Asleson Oct. 7, 2020, 8:10 p.m. UTC | #12
On 10/1/20 6:48 AM, Greg Kroah-Hartman wrote:
> On Wed, Sep 30, 2020 at 09:35:52AM -0500, Tony Asleson wrote:
>> On 9/30/20 2:38 AM, Greg Kroah-Hartman wrote:
>>> On Tue, Sep 29, 2020 at 05:04:32PM -0500, Tony Asleson wrote:
>>>> I'm trying to figure out a way to positively identify which storage
>>>> device an error belongs to over time.
>>>
>>> "over time" is not the kernel's responsibility.
>>>
>>> This comes up every 5 years or so. The kernel provides you, at runtime,
>>> a mapping between a hardware device and a "logical" device.  It can
>>> provide information to userspace about this mapping, but once that
>>> device goes away, the kernel is free to reuse that logical device again.
>>>
>>> If you want to track what logical devices match up to what physical
>>> device, then do it in userspace, by parsing the log files.
>>
>> I don't understand why people think it's acceptable to ask user space to
>> parse text that is subject to change.
> 
> What text is changing? The format of the prefix of dev_*() is well
> known and has been stable for 15+ years now, right?  What is difficult
> in parsing it?

Many of the storage layer messages are using printk, not dev_printk.

>>>> Thank you for supplying some feedback and asking questions.  I've been
>>>> asking for suggestions and would very much like to have a discussion on
>>>> how this issue is best solved.  I'm not attached to what I've provided.
>>>> I'm just trying to get towards a solution.
>>>
>>> Again, solve this in userspace, you have the information there at
>>> runtime, why not use it?
>>
>> We usually don't have the needed information if you remove the
>> expectation that user space should parse the human readable portion of
>> the error message.
> 
> I don't expect that userspace should have to parse any human readable
> portion, if they don't want to.  But if you do want it to, it is pretty
> trivial to parse what you have today:
> 
> 	scsi 2:0:0:0: Direct-Access     Generic  STORAGE DEVICE   1531 PQ: 0 ANSI: 6
> 
> If you really have a unique identifier, then great, parse it today:
> 
> 	usb 4-1.3.1: Product: USB3.0 Card Reader
> 	usb 4-1.3.1: Manufacturer: Generic
> 	usb 4-1.3.1: SerialNumber: 000000001531
> 
> What's keeping that from working now?

I believe these examples are using dev_printk.  With dev_printk we don't
need to parse the text, we can use the meta data.

> In fact, I would argue that it does seem to work, as there are many
> commercial tools out there that seem to handle it just fine...

I'm trying to get something that works for journalctl.

>>>> We've looked at user space quite a bit and there is an inherent race
>>>> condition with trying to fetch the unique hardware id for a message when
>>>> it gets emitted from the kernel as udev rules haven't even run (assuming
>>>> we even have the meta-data to make the association).
>>>
>>> But one moment later you do have the information, so you can properly
>>> correlate it, right?
>>
>> We could have the information if all the storage paths went through
>> dev_printk.  Here is what we get today when we encounter a read error
>> which uses printk in the block layer:
>>
>> {
>>         "_HOSTNAME" : "pn",
>>         "_TRANSPORT" : "kernel",
>>         "__MONOTONIC_TIMESTAMP" : "1806379233",
>>         "SYSLOG_IDENTIFIER" : "kernel",
>>         "_SOURCE_MONOTONIC_TIMESTAMP" : "1805611354",
>>         "SYSLOG_FACILITY" : "0",
>>         "MESSAGE" : "blk_update_request: critical medium error, dev
>> nvme0n1, sector 10000 op 0x0:(READ) flags 0x80700 phys_seg 3 prio class 0",
>>         "PRIORITY" : "3",
>>         "_MACHINE_ID" : "3f31a0847cea4c95b7a9cec13d07deeb",
>>         "__REALTIME_TIMESTAMP" : "1601471260802301",
>>         "_BOOT_ID" : "b03ed610f21d46ab8243a495ba5a0058",
>>         "__CURSOR" :
>> "s=a063a22bbb384da0b0412e8f652deabb;i=23c2;b=b03ed610f21d46ab8243a495ba5a0058;m=6bab28e1;t=5b087959e3cfd;x=20528862f8f765c9"
>> }
> 
> Ok, messy stuff, don't do that :)
> 
>> Unless you parse the message text you cannot make the association.  If
>> the same message was changed to dev_printk we would get:
>>
>>
>> {
>>         "__REALTIME_TIMESTAMP" : "1589401901093443",
>>         "__CURSOR" :
>> "s=caac9703b34a48fd92f7875adae55a2f;i=1c713;b=e2ae14a9def345aa803a13648b95429c;m=7d25b4f;t=5a58d77b85243;x=b034c2d3fb853870",
>>         "SYSLOG_IDENTIFIER" : "kernel",
>>         "_KERNEL_DEVICE" : "b259:917504",
>>         "__MONOTONIC_TIMESTAMP" : "131226447",
>>         "_UDEV_SYSNAME" : "nvme0n1",
>>         "PRIORITY" : "3",
>>         "_KERNEL_SUBSYSTEM" : "block",
>>         "_SOURCE_MONOTONIC_TIMESTAMP" : "130941917",
>>         "_TRANSPORT" : "kernel",
>>         "_MACHINE_ID" : "3f31a0847cea4c95b7a9cec13d07deeb",
>>         "_HOSTNAME" : "pn",
>>         "SYSLOG_FACILITY" : "0",
>>         "_BOOT_ID" : "e2ae14a9def345aa803a13648b95429c",
>>         "_UDEV_DEVLINK" : [
>>                 "/dev/disk/by-uuid/22fc262a-d621-452a-a951-7761d9fcf0dc",
>>                 "/dev/disk/by-path/pci-0000:00:05.0-nvme-1",
>>
>> "/dev/disk/by-id/nvme-nvme.8086-4445414442454546-51454d55204e564d65204374726c-00000001",
>>                 "/dev/disk/by-id/nvme-QEMU_NVMe_Ctrl_DEADBEEF"
>>         ],
>>         "MESSAGE" : "block nvme0n1: blk_update_request: critical medium
>> error, dev nvme0n1, sector 10000 op 0x0:(READ) flags 0x0 phys_seg 1 prio
>> class 0",
>>         "_UDEV_DEVNODE" : "/dev/nvme0n1"
>> }
> 
> Great, you have a udev sysname, a kernel subsystem and a way to
> associate that with a real device, what more are you wanting?

Did you miss that in my example it's currently a printk?  I showed what
it would look like if it were dev_printk.

Journald is using _KERNEL_DEVICE to add the _UDEV_DEVLINK information to
the journal entry, it's not parsing the prefix of the message.

The above json is outputted from journalctl when you specify "-o
json-pretty".

>> Journald already knows how to utilize the dev_printk meta data.
> 
> And if you talk to the printk developers (which you seem to be keeping
> out of the loop here), they are ripping out the meta data facility as
> fast as possible.  So don't rely on extending that please.

Again, I'm not trying to keep anyone out of the loop.  Last I knew the
meta data capability wasn't being removed, maybe this has changed?

Ref.

https://lore.kernel.org/lkml/20191007120134.ciywr3wale4gxa6v@pathway.suse.cz/


>> One idea that I've suggested along the way is creating a dev_printk
>> function that doesn't change the message text.  We then avoid breaking
>> people that are parsing.  Is this something that would be acceptable to
>> folks?  It doesn't solve early boot where udev rules haven't even run,
>> but it's better.
> 
> I still fail to understand the root problem here.

IMHO one of the root problems is that many storage messages are still
using printk.  Changing messages to dev_printk has been met with resistance.


> Ok, no, I think I understand what you think the problem is, I just don't
> see why it is up to the kernel to change what we have today when there
> are lots of tools out there working just fine without any kernel changes
> needed.
> 
> So try explaining the problem as you see it please, so we all know where
> to work from.

To me the problem today is the kernel logs information to identify how a
storage device is attached, not what is attached.  I think you agree
with this statement.  The log information is not helpful without the
information to correlate to the actual device.  I think you agree with
this too, as you have mentioned it's user space's responsibility to
collect this so that the correlation can be done.

If the following are *both* true, we have a usable message that has the
correlated data with it in the journal.

1. The storage related kernel message goes through dev_printk
2. At the time of the message being emitted the device symlinks are present.

When those two things are both true, journalctl can do the following
(today):

$ journalctl /dev/disk/by-id/wwn-0x5002538844584d30


However, it usually can't, because the above two things are rarely both
true at the same time for a given message when journald logs it to the
journal.

You keep saying this is a user space issue, but I believe we still need
a bit of help from the kernel at the very least by migrating to
dev_printk or something similar that adds the same meta data without
changing the message text.

Yes, my patch series went one step further and added the device ID as
structured data to the log message, but I was also trying to minimize
the race condition between the kernel emitting a message and journald
not having the information to associate it to the hardware device.

If people have other suggestions please let them be known.

> But again, cutting out the developers of the subsystems you were wanting
> to modify might just be making me really grumpy about this whole
> thing...

Again, I'm sorry I didn't reach out to the correct people.  Hopefully
I've CC'd everyone that is appropriate for this discussion.


Thanks,
Tony
Greg Kroah-Hartman Oct. 8, 2020, 4:48 a.m. UTC | #13
On Wed, Oct 07, 2020 at 03:10:17PM -0500, Tony Asleson wrote:
> On 10/1/20 6:48 AM, Greg Kroah-Hartman wrote:
> > On Wed, Sep 30, 2020 at 09:35:52AM -0500, Tony Asleson wrote:
> >> On 9/30/20 2:38 AM, Greg Kroah-Hartman wrote:
> >>> On Tue, Sep 29, 2020 at 05:04:32PM -0500, Tony Asleson wrote:
> >>>> I'm trying to figure out a way to positively identify which storage
> >>>> device an error belongs to over time.
> >>>
> >>> "over time" is not the kernel's responsibility.
> >>>
> >>> This comes up every 5 years or so. The kernel provides you, at runtime,
> >>> a mapping between a hardware device and a "logical" device.  It can
> >>> provide information to userspace about this mapping, but once that
> >>> device goes away, the kernel is free to reuse that logical device again.
> >>>
> >>> If you want to track what logical devices match up to what physical
> >>> device, then do it in userspace, by parsing the log files.
> >>
> >> I don't understand why people think it's acceptable to ask user space to
> >> parse text that is subject to change.
> > 
> > What text is changing? The format of the prefix of dev_*() is well
> > known and has been stable for 15+ years now, right?  What is difficult
> > in parsing it?
> 
> Many of the storage layer messages are using printk, not dev_printk.

Ok, then stop right there.  Fix that up.  Don't try to route around the
standard way of displaying log messages by creating a totally different
way of doing things.

Just use the dev_*() calls, and all will be fine.  Kernel log messages
are not "ABI" in that they have to be preserved in any specific way, so
adding a prefix to them as dev_*() does, will be fine.

thanks,

greg k-h
Hannes Reinecke Oct. 8, 2020, 5:54 a.m. UTC | #14
On 10/7/20 10:10 PM, Tony Asleson wrote:
> On 10/1/20 6:48 AM, Greg Kroah-Hartman wrote:
>> On Wed, Sep 30, 2020 at 09:35:52AM -0500, Tony Asleson wrote:
>>> On 9/30/20 2:38 AM, Greg Kroah-Hartman wrote:
>>>> On Tue, Sep 29, 2020 at 05:04:32PM -0500, Tony Asleson wrote:
>>>>> I'm trying to figure out a way to positively identify which storage
>>>>> device an error belongs to over time.
>>>>
>>>> "over time" is not the kernel's responsibility.
>>>>
>>>> This comes up every 5 years or so. The kernel provides you, at runtime,
>>>> a mapping between a hardware device and a "logical" device.  It can
>>>> provide information to userspace about this mapping, but once that
>>>> device goes away, the kernel is free to reuse that logical device again.
>>>>
>>>> If you want to track what logical devices match up to what physical
>>>> device, then do it in userspace, by parsing the log files.
>>>
>>> I don't understand why people think it's acceptable to ask user space to
>>> parse text that is subject to change.
>>
>> What text is changing? The format of the prefix of dev_*() is well
>> known and has been stable for 15+ years now, right?  What is difficult
>> in parsing it?
> 
> Many of the storage layer messages are using printk, not dev_printk.
> 
So that would be the immediate angle of attack ...

>>>>> Thank you for supplying some feedback and asking questions.  I've been
>>>>> asking for suggestions and would very much like to have a discussion on
>>>>> how this issue is best solved.  I'm not attached to what I've provided.
>>>>> I'm just trying to get towards a solution.
>>>>
>>>> Again, solve this in userspace, you have the information there at
>>>> runtime, why not use it?
>>>
>>> We usually don't have the needed information if you remove the
>>> expectation that user space should parse the human readable portion of
>>> the error message.
>>
>> I don't expect that userspace should have to parse any human readable
>> portion, if they don't want to.  But if you do want it to, it is pretty
>> trivial to parse what you have today:
>>
>> 	scsi 2:0:0:0: Direct-Access     Generic  STORAGE DEVICE   1531 PQ: 0 ANSI: 6
>>
>> If you really have a unique identifier, then great, parse it today:
>>
>> 	usb 4-1.3.1: Product: USB3.0 Card Reader
>> 	usb 4-1.3.1: Manufacturer: Generic
>> 	usb 4-1.3.1: SerialNumber: 000000001531
>>
>> What's keeping that from working now?
> 
> I believe these examples are using dev_printk.  With dev_printk we don't
> need to parse the text, we can use the meta data.

So it looks as if most of your usecase would be solved by moving to
dev_printk().
Why not work on that instead?
I do presume this will have immediate benefits for everybody, and will
have approval from everyone.

Cheers,

Hannes
Finn Thain Oct. 8, 2020, 6:22 a.m. UTC | #15
On Wed, 7 Oct 2020, Tony Asleson wrote:

> The log information is not helpful without the information to correlate 
> to the actual device.

Log messages that associate one entity with another can be generated 
whenever such an association comes into existence, which is probably when 
devices get probed.

E.g. a host:channel:target:lun identifier gets associated with a block 
device name by the dev_printk() calls in sd_probe():

[    3.600000] sd 0:0:0:0: [sda] Attached SCSI disk

BTW, if you think of {"0:0:0:0","sda"} as a row in some normalized table 
and squint a bit, this problem is not unlike the replication of database 
tables over a message queue.
Martin K. Petersen Oct. 8, 2020, 8:49 p.m. UTC | #16
Greg,

>> > What text is changing? The format of the prefix of dev_*() is well
>> > known and has been stable for 15+ years now, right?  What is difficult
>> > in parsing it?
>> 
>> Many of the storage layer messages are using printk, not dev_printk.
>
> Ok, then stop right there.  Fix that up.  Don't try to route around the
> standard way of displaying log messages by creating a totally different
> way of doing things.

Couldn't agree more!