Earlier this week, the White House released a memo containing a set of 10 principles for agencies to consider when developing a regulatory approach to artificial intelligence.

Focused squarely on "narrow" AI—or AI that can "learn and perform domain-specific or specialized tasks by extracting information from data sets, or other structured or unstructured sources of information"—the principles address several of the concerns that have been raised around the technology, including transparency of use and the potential for bias and discriminatory outcomes.

Still, the memo could ultimately be of more use to AI developers than to anyone hoping that some kind of federal regulation targeting the technology is on the immediate horizon. Jarno Vanto, a partner at Crowell & Moring, thinks developers have needed some guidance on how AI in the U.S. could be regulated.

"This is a step in that direction and that's a good thing," Vanto said.

But is it the right direction? The principles themselves are drawn in broad strokes, with concerns about how AI could affect privacy or civil liberties, for example, grouped under the pillar of "Public Trust in AI."

Dan Broderick, CEO and co-founder of the AI-assisted contract review tool BlackBoiler, argued that none of the principles in the memo were specific to AI, or to technology in general, and could instead apply to almost any other industry.

"The AI principles issued by the White House are very much in line with the current administration's stance on deregulation, where the executive branch is essentially saying to federal agencies, 'think long and hard before you decide to regulate artificial intelligence' so as not to stifle innovation or economic development in this area," Broderick wrote in an email.

He also noted that the data privacy and data protection aspects of AI were "largely unaddressed here." Still, keeping the principles at this broad level may not ultimately hinder agency-level or wider federal regulation moving forward.

While Ryan Steadman, chief revenue officer of the AI-powered email management company Zero, thinks there may be one or two principles missing from the list—a sense of geopolitical and cross-border collaboration, for example—he also pointed out that the process of building guidelines around AI has to start somewhere.

The task of refining those principles and eliminating any gaps that may still exist will ultimately fall into the hands of regulators.

"It's going to really be down to the federal agencies who have the subject matter expertise and domain expertise to know where those cracks are and what to look out for," Steadman said.

Vanto at Crowell & Moring also believes regulation of AI is likely to occur at the agency level, citing the "contentious" nature of the federal legislature as something that could push any major across-the-board laws specific to AI a long way off.

In the interim, are tech developers likely to actually take notice of the White House's principles? Steadman thinks so.

"I think they will. Whether they'll care about it or not is probably the question behind the question," he said.