Re: linux-next: build failure after merge of the nfsd tree
From: J. Bruce Fields
Date: Mon Apr 29 2013 - 13:59:15 EST
On Mon, Apr 29, 2013 at 01:47:16PM -0400, Chuck Lever wrote:
>
> On Apr 29, 2013, at 1:38 PM, "J. Bruce Fields" <bfields@xxxxxxxxxxxx> wrote:
>
> > On Mon, Apr 29, 2013 at 01:04:01PM -0400, Chuck Lever wrote:
> >>
> >> On Apr 29, 2013, at 12:21 PM, Trond Myklebust
> >> <trond.myklebust@xxxxxxxxxx> wrote:
> >>
> >>> On Mon, 2013-04-29 at 12:05 -0400, Chuck Lever wrote:
> >>>> On Apr 29, 2013, at 11:45 AM, "J. Bruce Fields"
> >>>> <bfields@xxxxxxxxxxxx> wrote:
> >>>>
> >>>>> On Mon, Apr 29, 2013 at 10:53:37AM -0400, Chuck Lever wrote:
> >>>>>>
> >>>>>> On Apr 28, 2013, at 9:24 PM, Stephen Rothwell
> >>>>>> <sfr@xxxxxxxxxxxxxxxx> wrote:
> >>>>>>
> >>>>>>> Hi J.,
> >>>>>>>
> >>>>>>> After merging the nfsd tree, today's linux-next build (powerpc
> >>>>>>> ppc64_defconfig) failed like this:
> >>>>>>>
> >>>>>>> net/sunrpc/auth_gss/svcauth_gss.c: In function 'gss_proxy_save_rsc':
> >>>>>>> net/sunrpc/auth_gss/svcauth_gss.c:1182:3: error: implicit declaration of function 'gss_mech_get_by_OID' [-Werror=implicit-function-declaration]
> >>>>>>>
> >>>>>>> Caused by commit 030d794bf498 ("SUNRPC: Use gssproxy upcall for
> >>>>>>> server RPCGSS authentication"). gss_mech_get_by_OID() was made
> >>>>>>> static in net/sunrpc/auth_gss/gss_mech_switch.c by commit
> >>>>>>> 9568c5e9a61d ("SUNRPC: Introduce rpcauth_get_pseudoflavor()") from
> >>>>>>> the nfs tree (a part of the nfs tree that you did not merge).
> >>>>>>>
> >>>>>>> I don't know how to fix this, so I have used the nfsd tree from
> >>>>>>> next-20130426 for today.
> >>>>>>
> >>>>>> Bruce, it might make sense for me to submit the three server-side
> >>>>>> RPC GSS patches, and then you can rebase the gssproxy work on top
> >>>>>> of those. Let me know how you would like to proceed.
> >>>>>
> >>>>> I'm happy to take those patches whenever you consider them ready.
> >>>>> Would that fix the problem?
> >>>>
> >>>> Someone would need to modify the gssproxy patches to use the new
> >>>> interfaces.
> >>>>
> >>>>> Also: it looks like 9568c5e9a61d "SUNRPC: Introduce
> >>>>> rpcauth_get_pseudoflavor()" is in Trond's linux-next, but not his
> >>>>> nfs-for-next. I'm not sure what that means--is it safe to rebase
> >>>>> on top of *that*?
> >>>>
> >>>> That doesn't seem right to me.
> >>>
> >>> I've now pulled the rpcsec_gss changes into nfs-for-next. The
> >>> main reason they were not pulled in earlier was uncertainty about
> >>> what to do about the increase in "AUTH_GSS upcall timed out."
> >>> syslog warnings.
> >>
> >> Trond's nfs-for-next now has the new rpcauth_get_gssinfo() and
> >> rpcauth_get_pseudoflavor() APIs, which are replacements for direct
> >> calls into the GSS mech switch. These APIs are a little more generic,
> >> and more robust in the face of unloaded GSS kernel modules.
> >>
> >> Instead of gss_mech_get_by_OID(), I suspect you want
> >> rpcauth_get_pseudoflavor(), but I haven't looked at the gssproxy code.
> >
> > It's doing
> >
> > 	status = -EOPNOTSUPP;
> > 	gm = gss_mech_get_by_OID(&ud->mech_oid);
> > 	if (!gm)
> > 		goto out;
> > 	status = -EINVAL;
> > 	status = gss_import_sec_context(ud->out_handle.data,
> > 					ud->out_handle.len,
> > 					gm, &rsci.mechctx,
> > 					&expiry, GFP_KERNEL);
> > 	if (status)
> > 		goto out;
> >
> > So we need a way to get from an OID and some mechanism-specific data to
> > a context.
> >
> > Looks to me like we just want to re-export gss_mech_get_by_OID().
>
> The reason for the new wrappers is to load the kernel modules properly before trying to discover the OID -> mechanism mapping.
>
> Where are you calling it from? If it's from outside of the GSS module, how do you guarantee the rpc_gss_auth module is loaded? What if the GSS mechanism for that OID isn't loaded?
Sorry, I should have said just "remove static from", not
"re-export"--it's in the same module. So there should be no problem
here, right?
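
To spell out what I mean, something like the following (an untested
sketch; the prototype and the header it lands in are from memory, so
the real change should match whatever gss_mech_switch.c currently
has):

	/*
	 * Sketch only, not a tested patch: drop the "static" that
	 * 9568c5e9a61d put in front of the gss_mech_get_by_OID()
	 * definition in net/sunrpc/auth_gss/gss_mech_switch.c, and put
	 * the declaration back where svcauth_gss.c can see it, e.g. in
	 * include/linux/sunrpc/gss_api.h or in a header private to
	 * auth_rpcgss:
	 */
	struct gss_api_mech *gss_mech_get_by_OID(struct xdr_netobj *obj);

No EXPORT_SYMBOL should be needed: gss_mech_switch.c and svcauth_gss.c
are both built into auth_rpcgss.ko, so this stays an ordinary
intra-module call and module loading doesn't come into it.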
--b.