Re: [git pull] drm next tree
From: Jerome Glisse
Date: Wed Mar 23 2011 - 10:40:00 EST
On Wed, Mar 23, 2011 at 8:21 AM, Stephen Clark <sclark46@xxxxxxxxxxxxx> wrote:
> On 03/22/2011 10:19 PM, Linus Torvalds wrote:
>> So I had hoped - yes, very naïve of me, I know - that this merge
>> window would be different.
>> But it's not.
>> On Wed, Mar 16, 2011 at 9:09 PM, Dave Airlie<airlied@xxxxxxxx> wrote:
>>> i915: big 855 fix, lots of output setup refactoring, lots of misc fixes.
>> .. and apparently a lot of breakage too. My crappy laptop that I abuse
>> for travel is - once more - broken by the updates. I cannot suspend
>> and resume, because every resume seems to fail.
>> One of the more useful failures was:
>> [ 61.656055] [drm:i915_hangcheck_elapsed] *ERROR* Hangcheck timer
>> elapsed... GPU hung
>> [ 61.656079] [drm] capturing error event; look for more information
>> in /debug/dri/0/i915_error_state
>> [ 61.664387] [drm:i915_wait_request] *ERROR* i915_wait_request
>> returns -11 (awaiting 2 at 0, next 3)
>> and I'm attaching the error_state file from that particular case here.
>> In other cases it seems to just hang entirely.
>> Keith/Jesse/Chris - I don't know that it's i915, and it will take
>> forever to bisect (I'll try). But it does seem pretty likely.
> Why can't the GPU be reset/restarted when this happens? When a NIC gets
> hung, it is reinitialized and restarted; why not the GPU?
GPUs are so complex that I know of cases where resetting one would bring
down the PCI bus and the CPU with it (basically, the reset clears some of
the GPU memory controller bits but not the GPU's PCI request queue, so
during and after the reset the GPU issues several requests to bogus
addresses on the bus, which trigger a double fault and eventually a CPU
shutdown). Of course, here we can blame the hardware designers for not
providing a proper reset.
All this varies from one GPU to another; it seems that reset has become
more reliable on newer hardware.
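For readers unfamiliar with the hangcheck mechanism mentioned in the log
above: the driver periodically compares the sequence number the GPU has
completed against the one seen at the previous check, and declares a hang
after several checks with no progress. The sketch below is a hypothetical
simplification for illustration only (the names `gpu_state`,
`hangcheck_elapsed`, and the threshold logic are made up, not the actual
i915 implementation):

```c
#include <stdbool.h>

/* Hypothetical state for a simplified hangcheck; not real i915 code. */
struct gpu_state {
	unsigned int last_seqno;    /* seqno observed at the previous check */
	unsigned int current_seqno; /* seqno the GPU has completed now */
	int stall_count;            /* consecutive checks with no progress */
};

/* Called periodically (e.g. from a timer). Returns true when the GPU
 * has made no progress for `threshold` consecutive checks, i.e. the
 * point where the real driver would print "Hangcheck timer elapsed...
 * GPU hung" and attempt a reset. */
bool hangcheck_elapsed(struct gpu_state *s, int threshold)
{
	if (s->current_seqno != s->last_seqno) {
		/* Progress was made: remember it and clear the stall count. */
		s->last_seqno = s->current_seqno;
		s->stall_count = 0;
		return false;
	}
	/* No progress since the last check. */
	return ++s->stall_count >= threshold;
}
```

The hard part is not the detection above but what happens next: as
described, on some hardware the recovery reset itself can wedge the bus,
which is why a simple "just reinitialize it like a NIC" is not always safe.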