Lawmakers are debating whether artificial intelligence should be permitted to sound like a lawyer, and what would happen if it did.
At the center of the debate is New York Senate Bill S7263, introduced by State Senator Kristen Gonzalez. The bill would forbid AI chatbots from posing as licensed professionals or providing what it calls “substantive” legal advice. More strikingly, it would allow users who rely on such guidance to sue the businesses that provide those tools. At a time when AI platforms field everything from homework problems to health concerns, the proposal feels less symbolic and more like a line drawn in wet cement.
Lawmakers appear to be reacting to actual embarrassment, not just hypothetical threats. Courts nationwide have already sanctioned attorneys for submitting briefs containing AI-generated case citations. In one well-known case, a pro se litigant allegedly flooded a federal docket with dubious filings by using an AI tool to try to reopen a settled dispute. Watching these incidents accumulate, it is easy to understand why regulators are concerned.
| Category | Details |
|---|---|
| Proposed Legislation | New York Senate Bill S7263 |
| Sponsor | Kristen Gonzalez |
| Jurisdiction | New York |
| Focus | Prohibits AI chatbots from impersonating licensed professionals or giving unauthorized legal advice |
| Key Provision | Private right of action allowing users to sue AI platform owners |
| Relevant Industry | Legal Tech, Generative AI |
| Reference Website | https://www.nysenate.gov/legislation/bills/2025/S7263 |
The bill also arrives as the legal sector itself is changing. The billable hour, which charges clients in six-minute increments for research, drafting, and review, has shaped law firm economics for decades. AI systems can now draft contracts in minutes, summarize cases in seconds, and answer procedural queries almost instantly. Investors appear to be betting that this automation will make legal services faster and cheaper. Some attorneys openly acknowledge that they already use AI to expedite their work, even as they warn clients not to rely on it alone.
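For readers unfamiliar with the convention, the six-minute increment means each tenth of an hour is the smallest unit a client can be charged for, with partial increments rounded up. A minimal sketch of that arithmetic (the $400 hourly rate is hypothetical, chosen only for illustration):

```python
import math

def bill_for(minutes_worked: float, hourly_rate: float) -> float:
    """Round time worked up to the next six-minute (0.1 hour)
    increment, then bill at the hourly rate."""
    increments = math.ceil(minutes_worked / 6)  # each increment = 0.1 hour
    return round(increments * 0.1 * hourly_rate, 2)

# A 7-minute phone call at a hypothetical $400/hour rate rounds up to
# two increments (0.2 hours), so the client is billed $80.00.
print(bill_for(7, 400))  # 80.0
```

The round-up rule is why even brief tasks generate real fees, and why near-instant AI drafting puts pressure on the model.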
The proposed legislation does not prohibit AI outright. It targets impersonation: chatbots posing as licensed lawyers, or giving advice that would constitute unauthorized practice of law if provided by a human without credentials. The bill also specifies that businesses cannot shield themselves with disclaimers alone. Under this framework, informing users that they are speaking with a “non-human chatbot” would not be enough.
This may have more to do with professional boundaries than technology. New York, like every state, has stringent laws governing who may practice law. The licensing system protects consumers, but it also safeguards the profession. Opponents contend that the bill’s wording is vague enough to sweep in ordinary people who use AI simply to understand their own legal circumstances. Read aggressively, it could restrict self-service tools that help tenants research their housing rights or small businesses draft contracts.
The disparity was hard to ignore last week as I stood outside a Midtown Manhattan courthouse and watched lawyers in fitted suits walk past self-represented litigants clutching folders of paperwork. Legal services remain costly. For many people, hiring a lawyer is simply not an option. Despite their flaws, AI chatbots have begun to fill that gap, offering explanations, describing processes, and occasionally providing step-by-step instructions. Whether that advice is trustworthy is another question.
The bill’s proponents contend that false information can actually be harmful. Senator Gonzalez has referenced documented instances of AI chatbots providing false legal information or fabricating professional credentials. According to this viewpoint, permitting legal action against AI platforms fosters accountability and compels businesses to create safer systems. That makes sense. Technology firms frequently act swiftly, releasing tools before the full legal ramifications are known.
Skepticism persists, though. Some legal technologists see the proposal as protectionism dressed up as consumer safety. If licensed attorneys can use AI internally to draft documents or conduct research, why shouldn’t people use similar tools to educate themselves? Whether the bill would stop at outright impersonation or reach more general informational uses remains unclear.
The conflict reflects broader national debates. Lawsuits against OpenAI, Google, and other AI developers range from wrongful death claims tied to chatbot interactions to defamation. Regulators are scrambling to write rules for technology that advances in real time. As this unfolds, lawmakers seem to be erecting guardrails on a highway that is still being built.
The ramifications are subtle but significant for law firms. The traditional lawyer-client relationship may continue to be the gatekeeper for legal services in New York if AI is unable to provide direct advice to consumers. For the time being, this keeps the billable hour intact. However, customers are demanding efficiency more and more. They are aware that AI can write a memo in a matter of seconds. Paying thousands of dollars for work that seems automated might not be acceptable to them.
As the bill moves through committee, its future remains uncertain. Amendments could narrow its scope, clarifying that impersonation is prohibited but general legal information is not. Alternatively, it might pass largely unaltered, establishing a model that other states could follow.
Meanwhile, someone in a small Buffalo office or a Queens apartment is probably asking a chatbot a legal question right now, hoping for clarity. The response comes promptly, confidently, and at no cost. Whether that exchange will soon be prohibited, or merely regulated, remains to be seen.
The end of the billable hour may come without much fanfare, shaped by cautious legislation and negotiated in committee rooms. Or it may stall against a profession committed to protecting its boundaries. Either way, the discussion has started, and it will not stop.