Tag Archives: TypeError: exceptions must derive from BaseException

The reason for the error message "TypeError: exceptions must derive from BaseException" when learning to throw an exception


This note records the reason for the error message "TypeError: exceptions must derive from BaseException" that came up while learning how to throw an exception.

Reference article:

(1) In the process of learning to throw an exception, the reason for the error message "TypeError: exceptions must derive from BaseException"

(2) https://www.cnblogs.com/bidepanpan/p/7115153.html

Let’s make a note.
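For reference, here is a minimal sketch (my own, not taken from the referenced articles) of what triggers the message: in Python 3, only instances of classes derived from BaseException may be raised, so raising anything else, such as an ordinary class or a plain string, produces exactly this TypeError.

class NotAnException:                # an ordinary class, not derived from BaseException
    pass

try:
    raise NotAnException()           # TypeError: exceptions must derive from BaseException
except TypeError as e:
    print(e)

class MyError(Exception):            # derives from Exception, so it can be raised
    pass

try:
    raise MyError("something went wrong")
except MyError as e:
    print(e)                         # prints: something went wrong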

PyTorch: TypeError: exceptions must derive from BaseException

PyTorch reports the error: TypeError: exceptions must derive from BaseException

In fact, it's a trivial mistake: the program never finds a matching network to build, so it falls into the error-raising branch. Take my own code as an example:

In base_options.py, the --netG parameter can only be chosen from the following options:

self.parser.add_argument('--netG', type=str, default='p2hed', choices=['p2hed', 'refineD', 'p2hed_att'], help='selects model to use for netG')

But the code that selects netG is as follows:

def define_G(input_nc, output_nc, ngf, netG, n_downsample_global=3, n_blocks_global=9, n_local_enhancers=1, 
             n_blocks_local=3, norm='instance', gpu_ids=[]):    
    norm_layer = get_norm_layer(norm_type=norm)     
    if netG == 'p2hed':    
        netG = DDNet_p2hED(input_nc, output_nc, ngf, n_downsample_global, n_blocks_global, norm_layer)
    elif netG == 'refineDepth':
        netG = DDNet_RefineDepth(input_nc, output_nc, ngf, n_downsample_global, n_blocks_global, n_local_enhancers, n_blocks_local, norm_layer)
    elif netG == 'p2h_noatt':        
        netG = DDNet_p2hed_noatt(input_nc, output_nc, ngf, n_downsample_global, n_blocks_global, n_local_enhancers, n_blocks_local, norm_layer)
    else:
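        # BUG: raise('...') raises a plain string, not a BaseException subclass,
        # so Python reports "TypeError: exceptions must derive from BaseException".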
        raise('generator not implemented!')
    #print(netG)
    if len(gpu_ids) > 0:
        assert(torch.cuda.is_available())   
        netG.cuda(gpu_ids[0])
    netG.apply(weights_init)
    return netG

Note that there is no branch for the "refineD" option here, so when the program runs it cannot determine which network to build for netG, falls into the else clause, and reports the error above.

In fact, the fix is simply to change "elif netG == 'refineDepth':" in the code above to "elif netG == 'refineD':", so that the branch matches the choice declared in options.py.
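Beyond renaming the branch, it is worth noting why the message mentions BaseException at all: raise('generator not implemented!') raises a plain string, which Python 3 refuses. The sketch below uses a hypothetical pick_generator helper (not part of the original project) with the choice names from options.py above, and raises a real exception class so that an unknown name produces a readable error instead of the TypeError.

def pick_generator(netG):
    """Hypothetical stand-in for define_G's branch selection."""
    known = {'p2hed', 'refineD', 'p2hed_att'}   # must match the choices declared in options.py
    if netG not in known:
        # raise('generator not implemented!') would raise a plain string here and
        # reproduce "TypeError: exceptions must derive from BaseException";
        # raising a real Exception subclass reports the actual problem instead.
        raise NotImplementedError('generator [%s] is not implemented' % netG)
    return netG

print(pick_generator('refineD'))     # ok: the name matches a declared choice
# pick_generator('refineDepth')      # NotImplementedError with a readable message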